Research Article | Open Access
Min Lin, Yiwen Zhong, Juan Lin, Xiaoyu Lin, "Discrete Bird Swarm Algorithm Based on Information Entropy Matrix for Traveling Salesman Problem", Mathematical Problems in Engineering, vol. 2018, Article ID 9461861, 15 pages, 2018. https://doi.org/10.1155/2018/9461861
Discrete Bird Swarm Algorithm Based on Information Entropy Matrix for Traveling Salesman Problem
Abstract
Although the bird swarm algorithm (BSA) shows excellent performance on continuous optimization problems, it is not an easy task to apply it to combinatorial optimization problems such as the traveling salesman problem (TSP). Therefore, this paper proposes a novel discrete BSA based on an information entropy matrix (DBSA) for the TSP. Firstly, in the DBSA algorithm, an information entropy matrix is constructed as a guide for generating new solutions. Each element of the matrix denotes the information entropy from city i to city j; the higher the information entropy, the larger the probability that a city will be visited. Secondly, each TSP path is represented as an array whose elements are city indices. Then, according to the needs of the minus function proposed in this paper, each TSP path is transformed into a Boolean matrix that represents the relationships of edges. Thirdly, the minus function is designed to evaluate the difference between two Boolean matrices. Based on the minus function and the information entropy matrix, the birds' position updating equations are redesigned to update the information entropy matrix without changing the original features of BSA. Three TSP operators are then proposed to generate new solutions according to the updated information entropy matrix. Finally, the performance of the DBSA algorithm was tested on a large number of benchmark TSP instances. Experimental results show that the DBSA algorithm outperforms, or is competitive with, many state-of-the-art metaheuristic algorithms.
1. Introduction
The traveling salesman problem (TSP) is a classical NP-hard problem: it is easy to describe but difficult to solve, and it is also a simplified form of many complex problems in many fields. The aim of the TSP is to find the shortest path that visits each city exactly once and then returns to the starting city. For a symmetric TSP with n cities, every permutation of the n cities yields a candidate tour; i.e., there are (n-1)!/2 distinct tours. The easiest approach to finding an optimal path is to evaluate all possible paths and then choose the shortest one, but the time complexity of this approach is O(n!). This means that no known polynomial-time algorithm can guarantee finding the global optimal solution. Therefore, many studies have proposed various methods for solving TSP instances within an acceptable time, and a widely used family is the metaheuristic algorithms. With their powerful performance and ability to find acceptable solutions within an affordable time, metaheuristic algorithms have gradually become an alternative to traditional optimization methods over the past decades.
In recent years, many metaheuristic algorithms have been proposed to solve the TSP, such as ant colony algorithm (ACO) [1, 2], artificial bee colony algorithm (ABC) [3], genetic algorithm (GA) [4], particle swarm optimization (PSO) [5], cuckoo search algorithm (CS) [6, 7], bat algorithm (BA) [8, 9], firefly algorithm (FA) [10], invasive weed optimization [11], bacterial evolutionary algorithm [12], dynamic multiscale region search algorithm (DMRSA) [13], a dual local search algorithm [14], immune algorithm [15], simulated annealing algorithm [16], and some hybrid algorithms [17–20].
The bird swarm algorithm (BSA) is a new metaheuristic algorithm recently proposed by Meng et al. [21] for continuous optimization problems. BSA is based on the swarm intelligence extracted from the social behaviors and social interactions in bird swarms. Compared with metaheuristic algorithms such as PSO, BSA has the advantages of fast convergence and high convergence precision. Due to its excellent performance, BSA and its variants have been applied in a wide range of applications, such as the optimization of benchmark functions [22], edge-based target detection for unmanned aerial vehicles using a competitive BSA [23], microgrid multiobjective operation optimization [24], edge cloud computing service composition based on a modified BSA [25], power flow problems [26], parameter estimation for chaotic systems using an improved BSA [27], and an improved particle filter based on BSA [28]. However, so far no discrete BSA has been proposed for the TSP. Although the basic BSA algorithm is simple and easy to implement, applying it to combinatorial optimization problems such as the TSP is not a simple task.
In order to extend the basic principles of the BSA algorithm to the TSP without changing the characteristics of the original algorithm, this paper presents a novel discrete bird swarm algorithm based on an information entropy matrix (DBSA). The DBSA algorithm first constructs an information entropy matrix in which each element represents the information entropy of selecting city j as the next city to visit after city i. Each bird of the DBSA algorithm holds a TSP solution represented by an array that stores the visiting sequence of cities, and, according to the needs of the minus function, the solution is converted into a Boolean matrix whose elements indicate whether the corresponding edge belongs to the solution. In DBSA, the minus function is proposed to evaluate the difference between two Boolean matrices. The results of the minus function are substituted into the birds' position updating equations to update the information entropy matrix iteratively. Finally, guided by the updated information entropy matrix, the birds use three TSP operators to produce new solutions. The performance of the DBSA algorithm was compared with recently published metaheuristic algorithms and recently improved classical metaheuristic algorithms on a wide range of benchmark TSP instances.
The remaining sections of this paper are organized as follows: Section 2 provides a short description of the basic BSA algorithm, the goal of the TSP, and metaheuristics for the TSP. Section 3 presents our DBSA algorithm. Section 4 compares the performance of the DBSA algorithm with some other state-of-the-art algorithms on a large number of TSP instances. Finally, Section 5 summarizes our study.
2. Related Work
This section introduces the principles of the BSA algorithm, the TSP, and metaheuristic algorithms for the TSP. Section 2.1 introduces the basic BSA algorithm. Section 2.2 describes the TSP and its goal. Section 2.3 gives a brief survey of state-of-the-art metaheuristic algorithms for the TSP.
2.1. The Principle of Bird Swarm Algorithm
BSA is a novel metaheuristic algorithm for solving optimization problems. It mimics the birds' foraging behavior, vigilance behavior, and flight behavior to solve global optimization problems. During foraging, each bird searches for food according to its individual experience and the population's experience. This behavior can be described mathematically as follows:

x_{i,j}^{t+1} = x_{i,j}^{t} + (p_{i,j} - x_{i,j}^{t}) × C × rand(0,1) + (g_{j} - x_{i,j}^{t}) × S × rand(0,1),

where x_{i,j}^{t} denotes the value of the jth element of the ith solution at the tth generation, rand(0,1) is a uniform random number in (0,1), p_{i,j} is the best previous position of the jth element of the ith bird, and g_{j} denotes the jth element of the global optimal solution. C and S are two positive numbers, called the cognitive and social accelerated coefficients, respectively.
When keeping vigilance, each bird tries to move towards the center of the swarm and inevitably competes with the others. The vigilance behavior is modeled as follows:

x_{i,j}^{t+1} = x_{i,j}^{t} + A1 × (mean_{j} - x_{i,j}^{t}) × rand(0,1) + A2 × (p_{k,j} - x_{i,j}^{t}) × rand(-1,1),
A1 = a1 × exp(-pFit_{i} × N / (sumFit + ε)),
A2 = a2 × exp(((pFit_{i} - pFit_{k}) / (|pFit_{k} - pFit_{i}| + ε)) × (N × pFit_{k} / (sumFit + ε))),

where k (k ≠ i) is a positive integer randomly chosen between 1 and N, a1 and a2 are two positive constants in [0, 2], pFit_{i} denotes the ith bird's best fitness value, and sumFit represents the sum of the swarm's best fitness values. ε, which is used to avoid the zero-division error, is the smallest constant in the computer. mean_{j} denotes the jth element of the average position of the whole swarm.
Birds fly to another location from time to time. When flying to another location, birds often switch between producing and scrounging. The birds with the highest fitness values are producers, while the ones with the lowest fitness values are scroungers. The other birds, with fitness values between the highest and lowest, randomly choose to be producers or scroungers. The flight behaviors of the producers and scroungers can be described mathematically as follows, respectively:

x_{i,j}^{t+1} = x_{i,j}^{t} + randn(0,1) × x_{i,j}^{t},
x_{i,j}^{t+1} = x_{i,j}^{t} + (x_{k,j}^{t} - x_{i,j}^{t}) × FL × rand(0,1),

where randn(0,1) is a Gaussian random number with mean 0 and standard deviation 1, k ∈ [1, N/2], and k ≠ i. FL denotes the degree to which a scrounger follows the producer to search for food; considering individual differences, the FL value of each scrounger is randomly selected from [0, 2]. The birds switch to flight behavior every FQ time steps. Algorithm 1 describes the implementation of BSA. In Algorithm 1, the parameter N denotes the population size, M denotes the maximum number of iterations, FQ represents the frequency of birds' flight behaviors, and P denotes the probability of foraging for food.

2.2. Traveling Salesman Problem
TSP is one of the most famous NP-hard combinatorial optimization problems. Given N cities and the coordinates of each city, the TSP is to find the shortest closed tour that visits all N cities. A valid TSP path can be represented as a cyclic permutation π = (π(1), π(2), ..., π(N)), where π(i) denotes the index of the ith visited city and π(N+1) = π(1) represents the return to the first visited city. The cost of a permutation (tour) is defined as

f(π) = Σ_{i=1}^{N-1} d(c_{π(i)}, c_{π(i+1)}) + d(c_{π(N)}, c_{π(1)}),

where d(c_i, c_j) represents the Euclidean distance between the two cities. Assuming that the coordinates of the two cities are (x_i, y_i) and (x_j, y_j), the distance is calculated as

d(c_i, c_j) = sqrt((x_i - x_j)² + (y_i - y_j)²).
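The tour-cost definition above translates directly into code. The following C++ sketch (names such as `City` and `tourLength` are ours, not from the paper) sums the consecutive edge lengths and adds the closing edge back to the starting city:

```cpp
#include <cmath>
#include <vector>

struct City { double x, y; };

// Euclidean distance between two cities.
double dist(const City& a, const City& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Cost of the tour pi over the given cities: the sum of consecutive edge
// lengths plus the closing edge from the last city back to the first.
double tourLength(const std::vector<City>& cities, const std::vector<int>& pi) {
    double len = 0.0;
    for (std::size_t i = 0; i + 1 < pi.size(); ++i)
        len += dist(cities[pi[i]], cities[pi[i + 1]]);
    len += dist(cities[pi.back()], cities[pi.front()]);  // return to start
    return len;
}
```

For example, four cities on a unit square visited in order give a tour length of 4. Note that TSPLIB's EUC_2D convention rounds each edge length to the nearest integer; the sketch above keeps exact distances for clarity.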
2.3. Metaheuristic Algorithms for the TSP
In recent years, many metaheuristic algorithms have been proposed for the TSP. Osaba et al. [8] presented an improved discrete bat algorithm which uses the Hamming distance to measure the distance between bats, with 2-opt and 3-opt operators adopted to improve solutions. Saji et al. [9] proposed a novel discrete BA (DBA) in which a two-exchange crossover operator is used to update solutions and a 2-opt operator is used to improve them. Zhou et al. [11] proposed a discrete invasive weed optimization algorithm (DIWO), which generates a new TSP solution through two local search operators: the 3-opt operator and an improved complete 2-opt operator. Ouaarab et al. [6] extended and improved CS (IDCS) by reconstructing its population and introducing a new category of cuckoos so that it can solve combinatorial problems as well as continuous ones. In the IDCS algorithm, the 2-opt move is used for small perturbations, and large perturbations are made by the double-bridge move. Zhou et al. [7] proposed a novel discrete CS (DCS) algorithm, which uses a learning operator, an "A" operator, and the 3-opt operator to accelerate convergence. Saraei et al. [10] proposed an FA that uses a greedy swap to extend the search area. Zhong et al. [3] presented a hybrid discrete artificial bee colony algorithm (HABC) with a threshold acceptance criterion. Applying a new solution updating equation, HABC learns from other bees and from the features of the problem simultaneously.
Besides the recently published metaheuristic algorithms above, many studies have improved classical metaheuristic algorithms and applied them to the TSP. Escario et al. [1] proposed an ant colony extended algorithm (ACE) with a self-organization property based on task division and an emergent task distribution driven by the feedback from the ants' searches. Ismkhan [2] put forward a new ACO algorithm with three effective strategies: a pheromone representation with linear space complexity, a new next-city selection, and a pheromone-augmented 2-opt local search. Zhang and Zhou [13] proposed a DMRSA algorithm using vitality selection for the TSP. In the DMRSA algorithm, vitality selection (VS) is proposed as a new modification scheme based on delete-oldest selection; the evaluation criterion of individuals in VS is the progress made by the individual over the recent generations, which differs from the pure fitness criterion. Mahi et al. [19] presented a hybrid method that combines the PSO algorithm, the ACO algorithm, and the 3-opt heuristic: the PSO algorithm detects optimal values of the parameters adopted for city selection in the ACO algorithm, and the 3-opt operator further improves the best solution produced by ACO. Kóczy et al. [12] presented a discrete bacterial memetic evolutionary algorithm (DBMEA), which combines the bacterial evolutionary algorithm with local search techniques. Ouenniche et al. [14] proposed a dual local search framework, i.e., a search method that starts with an infeasible solution, explores the dual space, reduces infeasibility iteratively, and lands in the primal space to deliver a feasible solution. Wang [4] improved the GA with two local optimization strategies for the TSP. The first local optimization strategy is the four-vertices-and-three-lines inequality, which is applied to local Hamiltonian paths to generate shorter Hamiltonian circuits (HCs). After the HCs are adjusted with the inequality, the second local optimization strategy reverses local Hamiltonian paths with more than two vertices, which also generates shorter HCs.
After analyzing the solution updating schemes used in the above algorithms, we found that redesigning the position updating equations is crucial for the TSP. The equations of the original algorithm are suited to continuous optimization problems; in order to handle combinatorial optimization problems, these equations need to be improved or redesigned. In addition, strategies must be introduced to generate new solutions according to the new position updating equations. Guided by these principles, this paper proposes a DBSA algorithm that not only redesigns the position updating equations for the TSP but also retains all the characteristics of the original BSA algorithm.
3. Discrete Bird Swarm Algorithm
This section introduces the main ideas of the DBSA algorithm. Section 3.1 explains the concept of the information entropy matrix and its construction. Section 3.2 describes the representation of solutions. Section 3.3 presents the new position updating equations. Section 3.4 gives a full description of the operators used by the birds. Finally, Section 3.5 introduces the implementation steps of the DBSA algorithm in detail.
3.1. Information Entropy Matrix
The concept of information entropy was first introduced by Shannon [29]. For the TSP, the information entropy from city i to city j is expressed as the Shannon entropy term

H(i,j) = -p(i,j) × log p(i,j).
The larger the value of H(i,j), the greater the possibility of choosing the path from city i to city j. Here p(i,j) represents the probability of traveling from city i to city j, computed from Dist(i,j), the distance between city i and city j. Based on this formula, this paper constructs an information entropy matrix to store the information entropy between any two cities. For example, for a four-city instance, the matrix is shown as follows:
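The construction of the matrix can be sketched as follows. The exact probability model is an assumption on our part: here p(i,j) is taken proportional to the inverse distance 1/Dist(i,j), normalized over all j ≠ i, and each entry is the Shannon term -p log p; the function name `entropyMatrix` is ours.

```cpp
#include <cmath>
#include <vector>

// Build an n x n information entropy matrix from a distance matrix.
// Assumption: p(i,j) = (1/dist[i][j]) / sum_k (1/dist[i][k]), k != i,
// and H(i,j) = -p(i,j) * log(p(i,j)); diagonal entries stay 0.
std::vector<std::vector<double>>
entropyMatrix(const std::vector<std::vector<double>>& dist) {
    std::size_t n = dist.size();
    std::vector<std::vector<double>> H(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i) {
        double norm = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            if (j != i) norm += 1.0 / dist[i][j];
        for (std::size_t j = 0; j < n; ++j) {
            if (j == i) continue;
            double p = (1.0 / dist[i][j]) / norm;  // closer city, larger p
            H[i][j] = -p * std::log(p);
        }
    }
    return H;
}
```

With this model, cities that are equidistant from city i receive equal entropy, and (for the small probabilities typical of large instances) nearer cities receive larger entropy, matching the selection bias described in the text.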
3.2. Representation of Solutions
The representation scheme of solutions is simple. Each bird represents a valid TSP path π, and each dimension of the bird denotes a city index. For example, for n = 4, the bird (3, 2, 1, 4) indicates that the first visited city is 3, the second visited city is 2, and so on. In order to incorporate the information entropy matrix, the bird is converted into a Boolean matrix representing the relation of edges. The matrix is shown in (12), where 1 denotes an edge between two cities that is selected and 0 stands for an edge that is not selected. Figure 1 depicts the conversion steps from the TSP path to the Boolean matrix. For example, if bird_i is (3, 2, 1, 4), i.e., the TSP path is 3→2→1→4→3, then the first edge, from city 3 to city 2, is selected in step 1, the second edge, from city 2 to city 1, is selected in step 2, and so on.
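The path-to-matrix conversion can be sketched in a few lines of C++. This is our illustration, not the paper's code; it uses 0-based city indices, whereas the worked example in the text uses 1-based indices.

```cpp
#include <vector>

// Convert a TSP path into a Boolean edge matrix: e[a][b] is true iff the
// tour travels directly from city a to city b, including the closing edge
// from the last city back to the first.
std::vector<std::vector<bool>> toEdgeMatrix(const std::vector<int>& path) {
    std::size_t n = path.size();
    std::vector<std::vector<bool>> e(n, std::vector<bool>(n, false));
    for (std::size_t i = 0; i + 1 < n; ++i)
        e[path[i]][path[i + 1]] = true;
    e[path[n - 1]][path[0]] = true;  // closing edge of the cycle
    return e;
}
```

Every valid tour of n cities yields exactly n selected edges, one per row of the matrix.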
3.3. Improved Position Updating Equations
The position updating equations of the basic BSA are designed for continuous optimization problems. For combinatorial optimization problems such as the TSP, these equations should be redesigned to be consistent with the characteristics of the problem at hand while retaining the good features of the original algorithm. Firstly, each TSP solution is converted into an edge matrix according to (12), and then a new minus function is introduced to evaluate the difference between two solutions in the DBSA algorithm. For example, suppose that x_i, p_i, and g represent the ith bird's solution, the ith bird's best solution, and the global best solution, respectively, and that their corresponding edge matrices are expressed as follows:
The minus function is shown in (14) and (16), where ∅ denotes the null set; in our algorithm, ∅ is set to 0. Equation (15) describes how the corresponding elements of the two matrices are subtracted inside the minus function. Let a_{ij} represent an element of one of the matrices in the minus function, and let b_{ij} denote the corresponding element of the other matrix. When a_{ij} = 1, it means that there is an edge from city i to city j. The result of a_{ij} - b_{ij} is shown in (15).
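The element-wise subtraction can be sketched as follows, under the assumption that the minus function keeps exactly the edges present in the first matrix but absent from the second (1 - 1 = 0, 1 - 0 = 1, and the negative case 0 - 1 collapses to the null set, i.e., 0). The function name `edgeMinus` is ours.

```cpp
#include <vector>

// Difference of two Boolean edge matrices: d[i][j] = 1 iff the edge (i, j)
// appears in matrix a but not in matrix b; all other cases yield 0.
std::vector<std::vector<int>>
edgeMinus(const std::vector<std::vector<bool>>& a,
          const std::vector<std::vector<bool>>& b) {
    std::size_t n = a.size();
    std::vector<std::vector<int>> d(n, std::vector<int>(n, 0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            d[i][j] = (a[i][j] && !b[i][j]) ? 1 : 0;  // clamp 0-1 to 0
    return d;
}
```

Under this reading, the result highlights the edges by which one solution differs from another, which is the quantity fed into the redesigned position updating equations.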
Based on (14) and (16), (1) is converted to (17) and (2) is converted to (18), where R denotes rand(0,1). Equations (5) and (6) are converted into (19) and (20), respectively.
3.4. TSP Operators Used by DBSA
After updating the birds' positions, i.e., after obtaining the new information entropy matrix, applying the information entropy matrix to generate a new TSP path is the essential remaining task. Therefore, swap, insert, and reverse operators are used to perturb the old TSP path, guided by the information entropy matrix, to generate a new TSP path. This section describes the three operators, taking the information entropy matrix of four cities as an example.
(1) Insert Operator. For n = 4, assume that the information entropy matrix is shown in (21). First, city 2 is randomly selected, i.e., the second row of the matrix. Then city 1 is randomly selected from the top m cities in that row, ranked by information entropy, and is inserted immediately after city 2 to form the updated solution. The reason for selecting from the top m cities ranked by information entropy is that the next city to visit after a given city is generally one of the m cities closest to it, which here corresponds to the m cities with the largest information entropy.
(2) Swap Operator. Similar to the insert operator: for example, the third row of the matrix is randomly selected first, and city 2 in the third row is randomly selected according to its entropy value. Then city 3 is swapped with city 2 to generate a new solution.
(3) Reverse Operator. In the reverse operator, the cities are selected in the same way as in the insert and swap operators. The reverse operator reverses the order of all cities lying between two chosen cities. For example, applying the reverse operator between city 1 and city 5 reverses the segment of the path between them.
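The three rearrangements can be sketched on plain index positions as follows. This is an illustration of the path mechanics only: the entropy-guided choice of which cities to move is omitted, and the function names and index parameters are ours.

```cpp
#include <algorithm>
#include <vector>

// Move the city at position i so that it comes right after position j.
std::vector<int> insertOp(std::vector<int> p, std::size_t i, std::size_t j) {
    int city = p[i];
    p.erase(p.begin() + i);
    if (i < j) --j;                  // erasure shifted the target left
    p.insert(p.begin() + j + 1, city);
    return p;
}

// Exchange the cities at positions i and j.
std::vector<int> swapOp(std::vector<int> p, std::size_t i, std::size_t j) {
    std::swap(p[i], p[j]);
    return p;
}

// Reverse the segment of the path from position i to position j, inclusive.
std::vector<int> reverseOp(std::vector<int> p, std::size_t i, std::size_t j) {
    std::reverse(p.begin() + i, p.begin() + j + 1);
    return p;
}
```

Each operator returns a modified copy, so in the DBSA scheme all three candidates can be produced from the same parent path and compared by fitness.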
In our DBSA algorithm, the three operators are performed simultaneously, and the best-performing operator is selected according to the fitness values. In this way, the optimal solution can be approached more quickly, and the diversity of the solutions is maintained during the iterations. Although comparing the three operators costs extra fitness evaluations, it helps prevent the search from falling into a local optimum. The detailed implementation of the three operators is given in Algorithm 3.
3.5. Implementation of DBSA
Algorithm 2 describes the implementation steps of the DBSA algorithm, and Figure 2 draws its flow chart. The purpose of the perform-TSP-operators step (lines (12) and (22)) in Algorithm 2 is to produce new solutions according to the updated information entropy matrix. Algorithm 3 gives the pseudocode for performing the TSP operators.


4. Experiments and Discussion
To evaluate the performance of the proposed algorithm, this section compares the DBSA algorithm with several state-of-the-art metaheuristic algorithms on a large number of TSP instances. These instances are selected from the TSPLIB standard library, with city sizes ranging from 48 to 33810. Section 4.1 explains the parameter settings in detail and analyzes the algorithm's time complexity. Section 4.2 compares the DBSA algorithm with recently published metaheuristic algorithms, and Section 4.3 compares it with recently improved classical metaheuristic algorithms.
4.1. Parameter Setting and Time Complexity
The DBSA algorithm was implemented in C++ with Visual Studio 2013. The experimental environment was an Intel Core 2.40 GHz CPU, 8 GB of memory, and Windows 7. Table 1 summarizes the parameter values used for DBSA. Unless explicitly stated otherwise, in all of the following experiments the maximum number of iterations was 2000, the swarm size was 30, and each TSP instance was run 20 times independently.

In addition, in all of the tables below, the column "Best" denotes the best solutions obtained by each algorithm, the column "Worst" represents the worst solutions found, the column "Avg" indicates the average solution length, the column "PA" denotes the percentage error of the average solutions, the column "PB" stands for the percentage error of the best solutions, and the column "time" shows the average running time in seconds. The "PA" and "PB" values are calculated as follows:

PA = (Avg - Opt) / Opt × 100%,
PB = (Best - Opt) / Opt × 100%,

where Opt denotes the length of the known optimal solution of the instance.
The time complexity of the DBSA algorithm is O(mn), where m represents the population size and n represents the number of iterations. Although the time complexity of the DBSA algorithm is similar to that of the other algorithms, the actual search performance is affected by the search strategy of each algorithm, so the actual search performances differ considerably. In this section, we compare the performance of DBSA with other algorithms under comparable evaluation budgets and compare the running time of DBSA with that of other algorithms run on machines of similar performance.
4.2. Comparison with Some Recently Published Metaheuristic Algorithms
In order to validate the performance of the DBSA algorithm among metaheuristic algorithms, it is first compared with several newly published metaheuristics: the DBA [9], IDCS [6], and HABC [3] algorithms. Table 2 gives the comparison results of DBSA with IDCS and DBA on 41 TSP instances taken from Ouaarab et al. [6] and Saji et al. [9]. The number in each instance name denotes the city size. Among the 41 instances, the minimum city size is 51 and the maximum is 1379; all are symmetric TSP problems. Each instance was run independently 20 times. In the IDCS algorithm, the number of cuckoos was 30, the maximum number of iterations was 500, and the experiments were conducted on a laptop with an Intel Core 2 Duo 2.00 GHz CPU and 3 GB of RAM. In the DBA algorithm, the number of bats was 15, the maximum number of iterations was 200, and the experiments were made on a PC with an Intel Core 2 Duo 2.1 GHz CPU and 2 GB of RAM. Among the 41 instances, the DBSA algorithm found the optimal solution on 31 instances, DBA also found 31 optimal solutions, and IDCS found the optimal solution on only 27 instances. The average PA values for the DBSA, DBA, and IDCS algorithms are 0.18, 0.45, and 0.60, respectively, which means that the DBSA algorithm is the most stable. When the city size is small, the average solutions found by the three algorithms are similar, but when the city size is greater than 150, the PA values, best values, and average values obtained by the DBSA algorithm are all superior to those of IDCS and DBA.

Table 3 compares the running time of the DBSA algorithm with those of the IDCS and DBA algorithms. The three algorithms ran on different machines, but the machines' performances were similar. It can be observed from Table 3 that the DBSA algorithm solves much faster than the other two. When the city size reaches 1379, the IDCS algorithm takes about one hour and DBA about half an hour, while the DBSA algorithm still takes only 15 seconds. The average running times of DBSA, IDCS, and DBA were 1.74, 205.97, and 139.08 seconds, respectively. Due to their slower solution speed, the city sizes that the IDCS and DBA algorithms can solve are very limited. Moreover, Figures 3, 4, and 5 give the roadmaps of instances eil76, eil101, and ch150 obtained by the DBSA algorithm, respectively. The points in the roadmaps denote the city index numbers. These roadmaps further illustrate the effectiveness of the proposed method.

Table 4 shows the results of the comparison between the DBSA and HABC algorithms. The HABC algorithm ran on a 2.83 GHz PC with 2 GB of RAM, a machine faster than the one used for DBSA. From the experimental results, DBSA consistently found 3 optimal solutions, while HABC found 2. The average PA values of the DBSA and HABC algorithms are 0.64 and 0.65, respectively. The average runtimes of DBSA and HABC are 98.20 and 116.17 seconds, respectively. Therefore, the performance of the DBSA algorithm is slightly better than that of HABC.

4.3. Comparison with Some Recently Improved Classical Metaheuristics
In order to further examine the performance of the DBSA algorithm and lend more credibility to our improvement, DBSA was compared with several recently improved classical metaheuristic algorithms: the DBMEA [12], DMRSA [13], ACE [1], and HGA [4] algorithms. Table 5 gives the results of the comparison between DBSA and ACE on 22 TSP instances, where Ry48, Ftv70, Ftv170, and Kro124p are asymmetric TSP problems. ACE is an extended ant colony algorithm. The maximum number of iterations of the ACE algorithm is 400 k, where k denotes the city size, while the maximum number of iterations of DBSA is 2000. Among the 22 instances, DBSA and ACE both found 19 optimal solutions; moreover, DBSA consistently found 2 optimal solutions across all runs, while ACE consistently found none. The average PA values of DBSA and ACE are 0.29 and 0.59, respectively, and the average PB values are 0.06 and 0.09, respectively. The average solutions obtained by DBSA are always better than those of ACE, and the worst solutions found by DBSA are also better. Therefore, the performance of DBSA is significantly better than that of ACE. Table 6 compares the DBSA algorithm with the DBMEA algorithm on 14 symmetric VLSI TSP benchmark problems taken from [12]. The number of bacteria was 100 in the DBMEA algorithm. From the experimental results, the average PB values of DBSA and DBMEA are 1.43 and 0.36, respectively, and the average PA values are 1.48 and 0.62, respectively. Therefore, on these VLSI instances, whether measured by the best solutions or the average solutions, the DBMEA algorithm is superior to the DBSA algorithm.


Table 7 shows the comparison results of the DBSA and DMRSA algorithms on 24 TSP instances [13]. The maximum number of iterations of the DBSA algorithm is 2000, while DMRSA uses a local search strategy that requires more internal iterations. The numbers of optimal solutions found by the DMRSA and DBSA algorithms are both 17. The average PB values of the DMRSA and DBSA algorithms are 0.37 and 0.06, respectively, and the average PA values are 0.65 and 0.22, respectively. When the city size is less than 200, the performance of DBSA and DMRSA is relatively close; when the city size is greater than 200, the performance of DBSA is significantly better. Table 8 compares the running times of DMRSA and DBSA on 10 TSP instances. As can be seen from the table, the average running time of the DMRSA algorithm is about 50 times that of the DBSA algorithm. Therefore, the performance of the DBSA algorithm is superior to that of DMRSA.


Table 9 gives the comparison results between the DBSA and HGA algorithms. The HGA algorithm uses two local optimization strategies, so it actually needs more iterations. The numbers of optimal solutions found by the HGA and DBSA algorithms are 9 and 30, respectively; moreover, DBSA consistently found 9 optimal solutions across all runs, while HGA consistently found none. The average PB values of DBSA and HGA are 0.01 and 0.74, respectively, and the average PA values are 0.10 and 1.05, respectively. So the overall performance of the DBSA algorithm outperforms that of HGA.

5. Conclusion and Future Work
The BSA algorithm is a novel metaheuristic inspired by bird swarms and was first proposed for continuous optimization problems. In order to apply it to combinatorial optimization problems such as the TSP, it is necessary to use appropriate strategies to preserve the characteristics of the original algorithm and to design suitable schemes for the specific combinatorial optimization problem. Based on these principles, this paper presents a novel discrete BSA with an information entropy matrix. Guided by the information entropy matrix, a minus function is introduced to evaluate the difference between two solutions, and the position updating equations of the birds are redesigned to update the information entropy matrix. Meanwhile, three TSP operators are introduced to produce new solutions according to the information entropy matrix. Experimental results show that these strategies are very efficient for the TSP: DBSA significantly outperforms many metaheuristic algorithms in most cases.
In our future research, we will apply the design principles and the analysis procedure of the proposed DBSA algorithm to guide the design and implementation of other metaheuristic algorithms for other discrete optimization problems.
Data Availability
The TSP data used to support the findings of this study have been deposited in the TSPLIB repository (https://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by the Natural Science Foundation of Fujian Province (no. 2015J01233), the projects of the Fujian Provincial Department of Education (nos. JAT160143 and JAT170181), and the Special Fund for Scientific and Technological Innovation of Fujian Agriculture and Forestry University (nos. CXZX2016026 and CXZX2016031).
References
 J. B. Escario, J. F. Jimenez, and J. M. Giron-Sierra, “Ant colony extended: experiments on the travelling salesman problem,” Expert Systems with Applications, vol. 42, no. 1, pp. 390–410, 2015.
 H. Ismkhan, “Effective heuristics for ant colony optimization to handle large-scale problems,” Swarm and Evolutionary Computation, vol. 32, pp. 140–149, 2017.
 Y. Zhong, J. Lin, L. Wang, and H. Zhang, “Hybrid discrete artificial bee colony algorithm with threshold acceptance criterion for traveling salesman problem,” Information Sciences, vol. 421, pp. 70–84, 2017.
 Y. Wang, “The hybrid genetic algorithm with two local optimization strategies for traveling salesman problem,” Computers & Industrial Engineering, vol. 70, no. 1, pp. 124–133, 2014.
 M. A. H. Akhand, S. Akter, M. A. Rashid, and S. B. Yaakob, “Velocity tentative PSO: an optimal velocity implementation based particle swarm optimization to solve traveling salesman problem,” IAENG International Journal of Computer Science, vol. 42, no. 3, pp. 1–12, 2015.
 A. Ouaarab, B. Ahiod, and X.-S. Yang, “Discrete cuckoo search algorithm for the travelling salesman problem,” Neural Computing and Applications, vol. 24, no. 7-8, pp. 1659–1669, 2014.
 Y. Zhou, X. Ouyang, and J. Xie, “A discrete cuckoo search algorithm for travelling salesman problem,” International Journal of Collaborative Intelligence, vol. 1, no. 1, p. 68, 2014.
 E. Osaba, X.-S. Yang, F. Diaz, P. Lopez-Garcia, and R. Carballedo, “An improved discrete bat algorithm for symmetric and asymmetric traveling salesman problems,” Engineering Applications of Artificial Intelligence, vol. 48, pp. 59–71, 2016.
 Y. Saji and M. E. Riffi, “A novel discrete bat algorithm for solving the travelling salesman problem,” Neural Computing and Applications, vol. 27, no. 7, pp. 1853–1866, 2016.
 M. Saraei, R. Analouei, and P. Mansouri, “Solving of travelling salesman problem using firefly algorithm with greedy approach,” Cumhuriyet Science Journal, vol. 36, no. 6, pp. 267–273, 2015.
 Y. Zhou, Q. Luo, H. Chen, A. He, and J. Wu, “A discrete invasive weed optimization algorithm for solving traveling salesman problem,” Neurocomputing, vol. 151, no. 3, pp. 1227–1236, 2015.
 L. T. Kóczy, P. Földesi, B. Tüű-Szabó et al., “Enhanced discrete bacterial memetic evolutionary algorithm - an efficacious metaheuristic for the traveling salesman optimization,” Information Sciences, vol. 460-461, pp. 389–400, 2018.
 H. Zhang and J. Zhou, “Dynamic multi-scale region search algorithm using vitality selection for traveling salesman problem,” Expert Systems with Applications, vol. 60, pp. 81–95, 2016.
 J. Ouenniche, P. K. Ramaswamy, and M. Gendreau, “A dual local search framework for combinatorial optimization problems with TSP application,” Journal of the Operational Research Society, vol. 68, no. 11, pp. 1377–1398, 2017.
 Z. Xu, Y. Wang, S. Li, Y. Liu, Y. Todo, and S. Gao, “Immune algorithm combined with estimation of distribution for traveling salesman problem,” IEEJ Transactions on Electrical and Electronic Engineering, vol. 11, pp. S142–S154, 2016.
 X. Geng, Z. Chen, W. Yang, D. Shi, and K. Zhao, “Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search,” Applied Soft Computing, vol. 11, no. 4, pp. 3680–3689, 2011.
 S. Chen and C. Chien, “Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques,” Expert Systems with Applications, vol. 38, no. 12, pp. 14439–14450, 2011.
 W. Deng, R. Chen, B. He, Y. Liu, L. Yin, and J. Guo, “A novel two-stage hybrid swarm intelligence optimization algorithm and application,” Soft Computing, vol. 16, no. 10, pp. 1707–1722, 2012.
 M. Mahi, Ö. K. Baykan, and H. Kodaz, “A new hybrid method based on particle swarm optimization, ant colony optimization and 3-Opt algorithms for traveling salesman problem,” Applied Soft Computing, vol. 30, pp. 484–490, 2015.
 Y. Zhong, J. Lin, L. Wang, and H. Zhang, “Discrete comprehensive learning particle swarm optimization algorithm with Metropolis acceptance criterion for traveling salesman problem,” Swarm and Evolutionary Computation, vol. 42, pp. 77–88, 2018.
 X.-B. Meng, X. Z. Gao, L. Lu, Y. Liu, and H. Zhang, “A new bio-inspired optimisation algorithm: Bird Swarm Algorithm,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 28, no. 4, pp. 673–687, 2016.
 M. Parashar, S. Rajput, H. M. Dubey, and M. Pandit, “Optimization of benchmark functions using a nature inspired bird swarm algorithm,” in Proceedings of the 2017 3rd International Conference on Computational Intelligence & Communication Technology (CICT), pp. 1–7, Ghaziabad, India, February 2017.
 X. Wang, Y. Deng, and H. Duan, “Edge-based target detection for unmanned aerial vehicles using competitive Bird Swarm Algorithm,” Aerospace Science and Technology, vol. 78, pp. 708–720, 2018.
 C. Zeng, C. Peng, K. Wang, Y. Zhang, and M. Zhang, “Multi-objective operation optimization of microgrid based on bird swarm algorithm,” Power System Protection and Control, vol. 44, no. 13, pp. 117–122, 2016.
 C. Jian, M. Li, and X. Kuang, “Edge cloud computing service composition based on modified bird swarm optimization in the internet of things,” Cluster Computing, vol. 12, pp. 1–9, 2018.
 M. Ahmad, N. Javaid, I. A. Niaz, S. Shafiq, O. U. Rehman, and H. M. Hussain, “Application of bird swarm algorithm for solution of optimal power flow problems,” in Complex, Intelligent, and Software Intensive Systems, vol. 772 of Advances in Intelligent Systems and Computing, pp. 280–291, Springer International Publishing, Cham, 2019.
 C. Xu and R. Yang, “Parameter estimation for chaotic systems using improved bird swarm algorithm,” Modern Physics Letters B, vol. 31, no. 36, 15 pages, 2017.
 L. Zhang, Q. Bao, W. Fan, K. Cui, H. Xu, and Y. Du, “An improved particle filter based on bird swarm algorithm,” in Proceedings of the 2017 10th International Symposium on Computational Intelligence and Design (ISCID), pp. 198–203, Hangzhou, China, December 2017.
 C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
Copyright
Copyright © 2018 Min Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.