Mathematical Problems in Engineering

Min Lin, Yiwen Zhong, Juan Lin, Xiaoyu Lin, "Discrete Bird Swarm Algorithm Based on Information Entropy Matrix for Traveling Salesman Problem", Mathematical Problems in Engineering, vol. 2018, Article ID 9461861, 15 pages, 2018. https://doi.org/10.1155/2018/9461861

Discrete Bird Swarm Algorithm Based on Information Entropy Matrix for Traveling Salesman Problem

Academic Editor: Francesco Riganti-Fulginei
Received: 31 Jul 2018
Revised: 26 Sep 2018
Accepted: 17 Oct 2018
Published: 30 Oct 2018

Abstract

Although the bird swarm algorithm (BSA) shows excellent performance in solving continuous optimization problems, it is not an easy task to apply it to combinatorial optimization problems such as the traveling salesman problem (TSP). Therefore, this paper proposes a novel discrete BSA based on an information entropy matrix (DBSA) for the TSP. Firstly, in the DBSA algorithm, the information entropy matrix is constructed as a guide for generating new solutions. Each element of the information entropy matrix denotes the information entropy from city i to city j; the higher the information entropy, the larger the probability that a city will be visited. Secondly, each TSP path is represented as an array, and each element of the array represents a city index. Then, according to the needs of the minus function proposed in this paper, each TSP path is transformed into a Boolean matrix which represents the relationship of edges. Thirdly, the minus function is designed to evaluate the difference between two Boolean matrices. Based on the minus function and the information entropy matrix, the birds' position updating equations are redesigned to update the information entropy matrix without changing the original features. Then three TSP operators are proposed to generate new solutions according to the updated information entropy matrix. Finally, the performance of the DBSA algorithm was tested on a large number of benchmark TSP instances. Experimental results show that the DBSA algorithm outperforms or is competitive with many state-of-the-art metaheuristic algorithms.

1. Introduction

The traveling salesman problem (TSP) is a classical NP-hard problem, easily described but difficult to solve, and it is also a simplified form of many complex problems in many fields. The aim of the TSP is to find the shortest path that visits each city exactly once and then returns to the starting city. For a symmetric TSP with n cities, any permutation of the n cities yields a candidate tour, i.e., there are (n−1)!/2 distinct paths. The easiest approach to finding an optimal path is to evaluate all possible paths and then choose the shortest one, but the time complexity of this approach is O(n!). This means that no known polynomial-time algorithm can guarantee finding the global optimal solution. Therefore, many studies have attempted to propose methods for solving TSP problems within an acceptable time, and a widely used class is the metaheuristic algorithms. With their powerful performance and ability to find acceptable solutions within an affordable time, metaheuristic algorithms have gradually become an alternative to traditional optimization methods over the past decades.

In recent years, many metaheuristic algorithms have been proposed to solve the TSP, such as the ant colony algorithm (ACO) [1, 2], artificial bee colony algorithm (ABC) [3], genetic algorithm (GA) [4], particle swarm optimization (PSO) [5], cuckoo search algorithm (CS) [6, 7], bat algorithm (BA) [8, 9], firefly algorithm (FA) [10], invasive weed optimization [11], bacterial evolutionary algorithm [12], dynamic multiscale region search algorithm (DMRSA) [13], a dual local search algorithm [14], immune algorithm [15], simulated annealing algorithm [16], and some hybrid algorithms [17–20].

The bird swarm algorithm (BSA) is a new metaheuristic algorithm recently proposed by Meng et al. [21] for continuous optimization problems. BSA is based on the swarm intelligence extracted from the social behaviors and social interactions in bird swarms. Compared to some metaheuristic algorithms such as PSO, BSA has the advantages of fast convergence and high convergence precision. Due to its excellent performance, BSA and its variants have been applied in a wide range of applications, such as optimization of benchmark functions [22], edge-based target detection for unmanned aerial vehicles using competitive BSA [23], microgrid multiobjective operation optimization [24], edge cloud computing service composition based on modified BSA [25], power flow problems [26], parameter estimation for chaotic systems using improved BSA [27], an improved particle filter based on BSA [28], etc. However, so far there is no BSA variant for solving the TSP. Although the basic BSA algorithm is simple and easy to implement, applying it to combinatorial optimization problems such as the TSP is not a simple task.

In order to extend the basic principle of the BSA algorithm to the TSP without changing the characteristics of the original algorithm, this paper presents a novel discrete bird swarm algorithm based on an information entropy matrix (DBSA). The DBSA algorithm first constructs an information entropy matrix in which each element H(i, j) represents the information entropy of selecting city j as the next visited city after city i. Each bird of the DBSA algorithm is responsible for a TSP solution represented by an array which stores the visiting sequence of cities, and, according to the needs of the minus function, the solution is converted into a Boolean matrix X in which each element represents whether the corresponding edge is in the solution. In DBSA, the minus function is proposed to evaluate the difference between two Boolean matrices. The results of the minus function are substituted into the birds' position update equations to update the information entropy matrix iteratively. Finally, guided by the updated information entropy matrix, birds use three TSP operators to produce new solutions. The performance of the DBSA algorithm was compared with some recently published metaheuristic algorithms and some recently improved classical metaheuristic algorithms on a wide range of benchmark TSP instances.

The remaining sections of this paper are organized as follows: Section 2 provides a short description of the basic BSA algorithm, the goal of TSP and the metaheuristics for the TSP. Section 3 presents our DBSA algorithm. Section 4 compares the performance of DBSA algorithm with some other state-of-the-art algorithms on a large number of TSP instances. Finally, in Section 5 we summarize our study.

2. Background

This section introduces the principle of the BSA algorithm, the TSP, and metaheuristic algorithms for the TSP. Section 2.1 introduces the basic BSA algorithm. Section 2.2 describes the TSP and its goal. Section 2.3 gives a brief survey of state-of-the-art metaheuristic algorithms for the TSP.

2.1. The Principle of Bird Swarm Algorithm

BSA is a novel metaheuristic algorithm for solving optimization problems. It mimics the birds' foraging behavior, vigilance behavior, and flight behavior to solve global optimization problems. During the process of foraging, each bird searches for food according to its individual experience and the population's experience. This behavior can be described mathematically as follows:

x_{i,j}^{t+1} = x_{i,j}^{t} + (p_{i,j} − x_{i,j}^{t}) × C × rand(0,1) + (g_{j} − x_{i,j}^{t}) × S × rand(0,1)    (1)

where x_{i,j}^{t} denotes the value of the j-th element of the i-th solution at the t-th generation, rand(0,1) is a uniform distribution function on (0,1), p_{i,j} is the best previous position for the j-th element of the i-th bird, and g_{j} denotes the j-th element of the global optimal solution. C and S are two positive numbers, which are called the cognitive and social accelerated coefficients, respectively.

When keeping vigilance, each bird tries to move towards the center of the swarm and inevitably competes with others. The vigilance behavior is shown as follows:

x_{i,j}^{t+1} = x_{i,j}^{t} + A1 × (mean_{j} − x_{i,j}^{t}) × rand(0,1) + A2 × (p_{k,j} − x_{i,j}^{t}) × rand(−1,1)    (2)

A1 = a1 × exp(−pFit_{i} / (sumFit + ε) × N)    (3)

A2 = a2 × exp(((pFit_{i} − pFit_{k}) / (|pFit_{k} − pFit_{i}| + ε)) × (N × pFit_{k} / (sumFit + ε)))    (4)

where k (k ≠ i) is a positive integer randomly chosen between 1 and N, a1 and a2 are two positive constants in [0, 2], pFit_{i} denotes the i-th bird's best fitness value, and sumFit represents the sum of the swarm's best fitness values. ε, which is used to avoid the zero-division error, is the smallest constant in the computer. mean_{j} denotes the j-th element of the average position of the whole swarm.

Birds fly to another location from time to time. When flying to another location, birds often switch between producing and scrounging. The birds with the highest fitness values would be producers, while the ones with the lowest fitness values would be scroungers. Birds with fitness values between the highest and lowest randomly choose to be producers or scroungers. The flight behaviors of the producers and scroungers can be described mathematically as follows, respectively:

x_{i,j}^{t+1} = x_{i,j}^{t} + randn(0,1) × x_{i,j}^{t}    (5)

x_{i,j}^{t+1} = x_{i,j}^{t} + (x_{k,j}^{t} − x_{i,j}^{t}) × FL × rand(0,1)    (6)

where randn(0,1) is a Gaussian distribution with mean 0 and standard deviation 1, k ∈ {1, 2, ..., N}, and k ≠ i. FL denotes the degree to which a scrounger follows the producer to search for food; considering individual differences, the FL value of each scrounger is randomly selected from [0, 2]. The birds switch to flight behavior every FQ time steps. Algorithm 1 describes the implementation of BSA. In Algorithm 1, the parameter N denotes the population size, M denotes the maximum number of iterations, FQ represents the frequency of the birds' flight behaviors, and P denotes the probability of foraging for food.

(1) Initialize the parameter values of N, M, FQ, P;
(2) Initialize a population of N birds and evaluate the individuals' fitness values;
(3) While (t < M)
(4) If (t % FQ ≠ 0)
(5) For i = 1 to N
(6) If rand(0,1) < P
(7) Birds forage for food using Eq. (1)
(8) Else
(9) Birds keep vigilance using Eq. (2)
(10) End if
(11) End for
(12) Else
(13) Divide the swarm into two parts: producers and scroungers;
(14) For i = 1 to N
(15) If (i == producer)
(16) Birds fly using Eq. (5) //producer
(17) Else
(18) Birds fly using Eq. (6) //scrounger
(19) End if
(20) End for
(21) End if
(22) Evaluate the fitness values of the new solutions;
(23) If (the fitness value of the new solution is better than pFit_i)
(24) pFit_i = the fitness value of the new solution;
(25) Update the global optimal solution;
(26) t++
(27) End While
(28) Output the global optimal solution;
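The continuous BSA loop of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch for a minimization problem: the parameter defaults (C, S, a1, a2) and the even producer/scrounger split are illustrative assumptions, not the paper's tuned settings.

```python
import math
import random

def bsa_minimize(f, dim, bounds, N=30, M=2000, FQ=3, P=0.8,
                 C=1.5, S=1.5, a1=1.0, a2=1.0):
    # Minimal sketch of continuous BSA (Meng et al.); defaults are illustrative.
    lo, hi = bounds
    rand = random.random
    X = [[lo + rand() * (hi - lo) for _ in range(dim)] for _ in range(N)]
    fit = [f(x) for x in X]
    pbest, pfit = [x[:] for x in X], fit[:]          # per-bird best positions
    g = min(range(N), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]               # global best
    eps = 1e-12
    for t in range(M):
        if t % FQ != 0:
            mean = [sum(p[j] for p in pbest) / N for j in range(dim)]
            sumfit = sum(pfit)
            for i in range(N):
                if rand() < P:   # foraging: learn from own best and global best
                    X[i] = [X[i][j]
                            + (pbest[i][j] - X[i][j]) * C * rand()
                            + (gbest[j] - X[i][j]) * S * rand()
                            for j in range(dim)]
                else:            # vigilance: move towards the swarm centre
                    k = random.choice([m for m in range(N) if m != i])
                    A1 = a1 * math.exp(-pfit[i] / (sumfit + eps) * N)
                    A2 = a2 * math.exp(((pfit[i] - pfit[k])
                                        / (abs(pfit[k] - pfit[i]) + eps))
                                       * N * pfit[k] / (sumfit + eps))
                    X[i] = [X[i][j]
                            + A1 * (mean[j] - X[i][j]) * rand()
                            + A2 * (pbest[k][j] - X[i][j]) * (2 * rand() - 1)
                            for j in range(dim)]
        else:                    # flight: best half produce, the rest scrounge
            order = sorted(range(N), key=lambda i: pfit[i])
            producers = set(order[:N // 2])
            for i in range(N):
                if i in producers:
                    X[i] = [x + random.gauss(0, 1) * x for x in X[i]]
                else:
                    k = random.choice(list(producers))
                    FL = rand() * 2
                    X[i] = [X[i][j] + (X[k][j] - X[i][j]) * FL * rand()
                            for j in range(dim)]
        for i in range(N):       # clamp to bounds, then update bests
            X[i] = [min(max(v, lo), hi) for v in X[i]]
            fit[i] = f(X[i])
            if fit[i] < pfit[i]:
                pfit[i], pbest[i] = fit[i], X[i][:]
                if fit[i] < gfit:
                    gfit, gbest = fit[i], X[i][:]
    return gbest, gfit

random.seed(1)
best, val = bsa_minimize(lambda x: sum(v * v for v in x),
                         dim=2, bounds=(-5, 5), N=20, M=300)
```

On the 2D sphere function above, the swarm quickly concentrates around the origin; the foraging step supplies exploitation while vigilance and flight supply exploration.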
2.2. Traveling Salesman Problem

TSP is one of the most famous NP-hard combinatorial optimization problems. Given N cities and the coordinates of each city, the TSP is to find the shortest closed tour through all N cities. A valid TSP path can be represented as a cyclic permutation π = (π_1, π_2, ..., π_N), where π_i denotes the index of the i-th visited city. The cost of a permutation (tour) is defined as

f(π) = Σ_{i=1}^{N−1} Dist(π_i, π_{i+1}) + Dist(π_N, π_1)    (7)

where Dist(i, j) represents the Euclidean distance between two cities. Assuming that the coordinates of the two cities are (x_i, y_i) and (x_j, y_j), the distance is calculated as shown in

Dist(i, j) = sqrt((x_i − x_j)^2 + (y_i − y_j)^2)    (8)
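The tour cost defined above can be computed directly; a minimal sketch in Python (the four-city square is illustrative):

```python
import math

def dist(a, b):
    # Euclidean distance between two city coordinates (x, y)
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(coords, tour):
    # Sum of edge lengths along the cyclic permutation, closing back to the start
    n = len(tour)
    return sum(dist(coords[tour[i]], coords[tour[(i + 1) % n]])
               for i in range(n))

# Four cities on a unit square; the tour 0-1-2-3-0 has length 4
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
perimeter = tour_length(coords, [0, 1, 2, 3])  # 4.0
```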

2.3. Metaheuristic Algorithms for the TSP

In recent years, many metaheuristic algorithms have been proposed for the TSP. Osaba et al. [8] presented an improved discrete bat algorithm which uses the Hamming distance to measure the distance between bats, and 2-opt and 3-opt operators are adopted to improve solutions. Saji et al. [9] proposed a novel discrete BA (DBA) where a two-exchange crossover operator is used to update solutions and a 2-opt operator is used to improve solutions. Zhou et al. [11] proposed a discrete invasive weed optimization algorithm (DIWO). DIWO generates a new TSP solution through two local search operators: one is the 3-opt operator; the other is an improved complete 2-opt operator. Ouaarab et al. [6] extended and improved CS (IDCS) by reconstructing its population and introducing a new category of cuckoos so that it can solve combinatorial problems as well as continuous problems. In the IDCS algorithm, the 2-opt move is used for small perturbations, and large perturbations are made by the double-bridge move. Zhou et al. [7] proposed a novel discrete CS (DCS) algorithm, which uses a learning operator, an "A" operator, and a 3-opt operator to accelerate convergence. Saraei et al. [10] proposed an FA which uses greedy swaps to extend the search area. Zhong et al. [3] presented a hybrid discrete artificial bee colony algorithm (HABC) with a threshold acceptance criterion. Applying a new solution updating equation, HABC learns from other bees and from the features of the problem synchronously.

In addition to the recently published metaheuristic algorithms above, many studies have improved classical metaheuristic algorithms and applied them to the TSP. Escario et al. [1] proposed an ant colony extended algorithm (ACE) which includes a self-organization property. This property is based on task division and an emergent task distribution according to the feedback provided by the results of the ants' searches. Ismkhan et al. [2] put forward a new ACO algorithm with three effective strategies: a pheromone representation with linear space complexity, a new next-city selection, and a pheromone-augmented 2-opt local search. Zhang et al. [13] proposed a DMRSA algorithm using vitality selection for the TSP. In the DMRSA algorithm, vitality selection (VS) is proposed as a new modification scheme based on delete-oldest selection. The evaluation criterion of individuals in VS is the progress an individual makes over successive local generations, which differs from the pure fitness criterion. Mahi et al. [19] presented a hybrid method that combines the PSO algorithm, the ACO algorithm, and the 3-opt heuristic. The PSO algorithm is used for detecting optimum parameter values for the city selection operations in the ACO algorithm, and the 3-opt operator further improves the best solution produced by ACO. Kóczy et al. [12] presented a discrete bacterial memetic evolutionary algorithm (DBMEA), which combines the bacterial evolutionary algorithm with local search techniques. Ouenniche et al. [14] proposed a dual local search framework, that is, a search method that starts with an infeasible solution, explores the dual space, reduces infeasibility iteratively, and lands in the primal space to deliver a feasible solution. Wang [4] improved the GA with two local optimization strategies for the TSP. The first is the four-vertices-and-three-lines inequality, which is applied to local Hamiltonian paths to generate shorter Hamiltonian circuits (HCs). After the HCs are adjusted with the inequality, the second strategy reverses local Hamiltonian paths with more than 2 vertices, which also generates shorter HCs.

After analyzing the solution updating schemes used in the above algorithms for the TSP, we found that it is very important to redesign the position updating equations for the TSP. The equations of the original algorithms suit continuous optimization problems; in order to handle combinatorial optimization problems, these equations need to be improved or redesigned. In addition, some strategies should be introduced to generate new solutions according to the new position updating equations. Guided by these principles, this paper proposes the DBSA algorithm, which not only redesigns the position updating equations for the TSP, but also retains all the characteristics of the original BSA algorithm.

3. Discrete Bird Swarm Algorithm

This section introduces the main ideas of the DBSA algorithms. Section 3.1 explains the concept of information entropy matrix and the construction steps. Section 3.2 describes the representation of solutions. Section 3.3 presents the new position updating equations. Section 3.4 gives the full description of the operators used by birds. Finally, Section 3.5 introduces the implementation steps of DBSA algorithm in detail.

3.1. Information Entropy Matrix

The concept of information entropy was first introduced by Shannon [29]. For the TSP, the information entropy from city i to city j is expressed as follows:

H(i, j) = −p(i, j) × log2 p(i, j)    (9)

The larger the value of H(i, j), the greater the possibility of choosing the path from city i to city j. Here p(i, j) represents the probability of moving from city i to city j, and its calculation formula is expressed as follows:

p(i, j) = (1 / Dist(i, j)) / Σ_{k=1, k≠i}^{n} (1 / Dist(i, k))    (10)

where Dist(i, j) represents the distance between city i and city j. Based on the formula for calculating the information entropy, this paper constructs an information entropy matrix H to store the information entropy between any two cities. For example, for n = 4, the matrix is shown as follows:

H = [ 0        H(1,2)   H(1,3)   H(1,4)
      H(2,1)   0        H(2,3)   H(2,4)
      H(3,1)   H(3,2)   0        H(3,4)
      H(4,1)   H(4,2)   H(4,3)   0      ]    (11)
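A minimal sketch of building the matrix, assuming the transition probability p(i, j) is inversely proportional to Dist(i, j) and normalized per row (an assumption; the paper's exact normalization may differ):

```python
import math

def entropy_matrix(coords):
    # H(i, j) = -p(i, j) * log2(p(i, j)), with p(i, j) assumed proportional
    # to 1 / Dist(i, j), normalized over all j != i for each row i.
    n = len(coords)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        inv = [0.0] * n
        for j in range(n):
            if i != j:
                inv[j] = 1.0 / math.dist(coords[i], coords[j])
        s = sum(inv)
        for j in range(n):
            if i != j:
                p = inv[j] / s
                H[i][j] = -p * math.log2(p)
    return H

# Four collinear cities, one unit apart
H = entropy_matrix([(0, 0), (1, 0), (2, 0), (3, 0)])
```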

3.2. Representation of Solutions

The representation scheme of solutions is simple. Each bird represents a valid TSP path π, and each dimension of the bird denotes a city index. For example, for n = 4, the bird (3, 2, 1, 4) indicates that the first visited city is 3, the second visited city is 2, and so on. In order to incorporate the information entropy matrix, the bird is converted into a Boolean matrix X representing the relation of edges. The matrix is shown in (12), where 1 denotes that the edge between the two cities is selected and 0 stands for an edge that is not selected. Figure 1 depicts the conversion steps from the TSP path to the Boolean matrix. For example, if bird_i is (3, 2, 1, 4), i.e., the TSP path is 3-2-1-4-3, then the first edge, from city 3 to city 2, is selected in step 1, the second edge, from city 2 to city 1, is selected in step 2, and so on:

X = [ 0  0  0  1
      1  0  0  0
      0  1  0  0
      0  0  1  0 ]    (12)
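The conversion from a tour to the Boolean edge matrix can be sketched as follows (city indices are 1-based, as in the paper):

```python
def path_to_edge_matrix(path, n):
    # Boolean matrix X where X[a][b] = 1 iff the tour travels from
    # city a+1 to city b+1 (0-based storage for 1-based city labels)
    X = [[0] * n for _ in range(n)]
    for k in range(len(path)):
        a = path[k] - 1                      # current city
        b = path[(k + 1) % len(path)] - 1    # next city, wrapping to the start
        X[a][b] = 1
    return X

# The example tour 3-2-1-4-3 selects edges (3,2), (2,1), (1,4), (4,3)
X = path_to_edge_matrix([3, 2, 1, 4], 4)
```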

3.3. Improved Position Updating Equations

The position updating equations of the basic BSA are designed for continuous optimization problems. For combinatorial optimization problems such as the TSP, these equations should be redesigned to be consistent with the characteristics of the problem at hand as well as retain the good features of the original algorithm. Firstly, each TSP solution is converted to an edge matrix according to (12), and then a new minus function is introduced to evaluate the difference between two solutions in the DBSA algorithm. For example, suppose that X_i, P_i, and G represent the i-th bird's solution, the i-th bird's best solution, and the global best solution, respectively, and that their corresponding edge matrices are expressed as shown in (13).

The minus function is shown in (14) and (16), where ∅ denotes the null set; in our algorithm, ∅ is set to 0. Equation (15) describes the calculation method for the subtraction of the corresponding elements of the two matrices in the minus function. Let x(i, j) represent an element of one of the matrices in the minus function, and let y(i, j) denote the corresponding element of the other matrix. When x(i, j) = 1, it means that there is an edge from city i to city j. The result of x(i, j) − y(i, j) is shown in (15).

Based on (14) and (16), (1) is converted to (17) and (2) is converted to (18), where R denotes rand(0,1). Equations (5) and (6) are converted into (19) and (20), respectively.
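The text sets the null set ∅ to 0; a minimal sketch of a minus function consistent with that convention, assuming an element-wise subtraction of Boolean edge matrices in which negative results are mapped to 0 (i.e., the result keeps the edges present in the first matrix but absent from the second), is:

```python
def minus(A, B):
    # Element-wise subtraction of two Boolean edge matrices; a negative
    # result (edge in B but not in A) is treated as the null set, i.e. 0.
    n = len(A)
    return [[max(A[i][j] - B[i][j], 0) for j in range(n)] for i in range(n)]

# Edge (2,1) is in A but not in B, so only that entry survives
A = [[0, 1], [1, 0]]
B = [[0, 1], [0, 0]]
D = minus(A, B)
```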

3.4. TSP Operators Used by DBSA

After updating the birds' positions, i.e., after obtaining the new information entropy matrix, how to apply the information entropy matrix to generate a new TSP path is an essential task. Therefore, the swap, insert, and reverse operators are used to perturb the old TSP path according to the information entropy matrix to generate a new TSP path. This section describes the three operators, taking the information entropy matrix of four cities as an example.

(1) Insert Operator. For n = 4, assume that the information entropy matrix is shown in (21), and consider the i-th bird's current solution. First, randomly select city 2, i.e., the second row of the matrix. Then randomly select city 1 from the top m cities ranked by information entropy and insert city 1 behind city 2. The reason why we randomly select a city from the top m cities ranked by information entropy is that the next visited city of a city is generally selected from the m cities closest to it, which appear here as the m cities with the largest information entropy. Thus, the solution is updated accordingly.

(2) Swap Operator. The swap operator is similar to the insert operator. For example, the third row of the matrix is randomly selected first, and city 2 in the third row is randomly selected according to its entropy value. Then city 3 is swapped with city 2 to generate a new solution.

(3) Reverse Operator. In the reverse operator, the approach to selecting cities is the same as in the insert and swap operators. The reverse operator reverses the visiting order of all cities between the two selected cities; for example, applying it between city 1 and city 5 reverses the subpath from city 1 to city 5.

In our DBSA algorithm, the three operators are performed simultaneously, and the best operator is selected according to the fitness values. In this way, the optimal solution can be approached more quickly, and the diversity of the solutions is maintained during the iterations. Although comparing three operators costs extra fitness evaluations, it helps prevent falling into local optima. The detailed implementation process of the three operators is given in Algorithm 3.
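A sketch of the three operators and the best-of-three selection, operating on tour positions; the entropy-guided choice of the two cities (ci, nci) described above is omitted here for brevity, so the position arguments and the toy length function are assumptions for illustration:

```python
def swap_op(path, i, j):
    # Exchange the cities at positions i and j
    p = path[:]
    p[i], p[j] = p[j], p[i]
    return p

def insert_op(path, i, j):
    # Remove the city at position j and re-insert it right after the city
    # at position i (cities are unique, so index() is safe; assumes i != j)
    p = path[:]
    city = p.pop(j)
    p.insert(p.index(path[i]) + 1, city)
    return p

def reverse_op(path, i, j):
    # Reverse the order of all cities between positions i and j inclusive
    lo, hi = sorted((i, j))
    return path[:lo] + path[lo:hi + 1][::-1] + path[hi + 1:]

def best_of_three(path, i, j, length):
    # Apply all three operators and keep the shortest resulting tour,
    # mirroring DBSA's comparison of the three candidate solutions
    candidates = [swap_op(path, i, j),
                  insert_op(path, i, j),
                  reverse_op(path, i, j)]
    return min(candidates, key=length)

# Toy length metric over city labels, purely for demonstration
length = lambda p: sum(abs(p[k] - p[(k + 1) % len(p)]) for k in range(len(p)))
best = best_of_three([1, 3, 2, 4], 1, 2, length)
```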

3.5. Implementation of DBSA

Algorithm 2 describes the implementation steps of the DBSA algorithm and Figure 2 draws its flow chart, where the purpose of performing the TSP operators (lines (12) and (22) of Algorithm 2) is to produce new solutions according to the updated information entropy matrix. Algorithm 3 gives the pseudocode of the TSP operators.

(1) Read the TSP library and construct the information entropy matrix;
(2) Initialize the values of the parameters N, M, FQ, P;
(3) Initialize a population of N birds; each bird deals with a TSP path; evaluate the
length of the TSP paths, i.e., the fitness values of the birds;
(4) While (t < M)
(5) If (t % FQ ≠ 0) //foraging or keeping vigilance
(6) For i = 1 to N
(7) If rand(0,1) < P
(8) Birds forage for food using Eq. (17)
(9) Else
(10) Birds keep vigilance using Eq. (18)
(11) End if
(12) Perform TSP operators;
(13) End for
(14) Else //flying
(15) Divide the swarm into two parts: producers and scroungers;
(16) For i = 1 to N
(17) If (i == producer)
(18) Birds fly using Eq. (19)
(19) Else
(20) Birds fly using Eq. (20)
(21) End if
(22) Perform TSP operators;
(23) End for
(24) End if
(25) t++
(26) End While
(27) Output the global optimal solution;
(1) Randomly select city ci, and find its corresponding row in the information entropy matrix;
(2) Randomly select another city nci from the top m cities ranked by information entropy;
(3) Perform the reverse operator on city ci and city nci, and evaluate its fitness value rvslength;
(4) Perform the swap operator on city ci and city nci, and evaluate its fitness value swplength;
(5) Perform the insert operator on city ci and city nci, and evaluate its fitness value inslength;
(6) Select the solution with the best fitness value among the three schemes as the new solution;
(7) If (the new fitness value is better than the bird's best fitness value)
(8) Update the bird's best solution and fitness value;
(9) Update the global best solution;

4. Experiments and Discussion

To evaluate the performance of our proposed algorithm, this section compares the DBSA algorithm with several state-of-the-art metaheuristic algorithms on a large number of TSP instances. These TSP instances are selected from the TSPLIB standard library, with city sizes ranging from 48 to 33810. Section 4.1 explains the various parameters in detail and analyzes the algorithm's time complexity. Section 4.2 compares the DBSA algorithm with some recently published metaheuristic algorithms, and Section 4.3 compares it with some recently improved classical metaheuristic algorithms.

4.1. Parameter Setting and Time Complexity

The DBSA algorithm was implemented in C++ on Visual Studio 2013. The experimental environment was an Intel Core 2.40 GHz CPU, 8 GB memory, Windows 7 OS. Table 1 summarizes the parameter values used for DBSA. Unless explicitly stated otherwise, in all of the following experiments the maximum iteration number was 2000, the swarm size was 30, and each TSP instance was run 20 times independently.


Parameter | Value | Parameter meaning
N | 30 | The number of birds
M | 2000 | The maximum iteration number
P | — | The probability of foraging for food
FQ | 3 | The frequency of birds' flight behaviours

In addition, in all of the tables below, the column "Best" denotes the best solution obtained by each algorithm, the column "Worst" represents the worst solution found, the column "Avg" indicates the average solution length, the column "PA" denotes the percentage error of the average solution, the column "PB" stands for the percentage error of the best solution, and the column "Time" shows the average running time in seconds. The "PA" and "PB" values are calculated as follows:

PA = (Avg − Opt) / Opt × 100%

PB = (Best − Opt) / Opt × 100%

where Opt denotes the length of the known optimal solution.
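Assuming the standard percentage-error definition relative to the known optimum, the PA and PB columns can be reproduced as:

```python
def percentage_error(value, opt):
    # PB uses the best tour length, PA the average tour length:
    # error = (value - opt) / opt * 100
    return (value - opt) / opt * 100.0

# e.g. for rat195 in Table 2: Best = 2328, Avg = 2330.05, Opt = 2323
pb = percentage_error(2328, 2323)      # ≈ 0.22
pa = percentage_error(2330.05, 2323)   # ≈ 0.30
```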

The time complexity of the DBSA algorithm is O(m × n), where m represents the population size and n represents the number of iterations. Although the time complexity of the DBSA algorithm is similar to that of other algorithms, the actual search performance is affected by the search strategies of each algorithm, so the actual search performances differ considerably. In this section, we compare the performance of DBSA with other algorithms under similar parameter settings, and compare the running time of the DBSA algorithm with some algorithms run on machines of similar performance.

4.2. Compare with Some Recently Published Metaheuristic Algorithms

In order to validate the performance of the DBSA algorithm among metaheuristic algorithms, the DBSA algorithm is first compared with several newly published metaheuristics, namely the DBA [9], IDCS [6], and HABC [3] algorithms. Table 2 gives the comparison results of DBSA with the IDCS and DBA algorithms on 41 TSP instances taken from Ouaarab et al. [6] and Saji et al. [9]. The number in the instance title denotes the city size. Among the 41 TSP instances, the minimum city size is 51 and the maximum is 1379; they are all symmetric TSP problems. Each instance was run independently 20 times. In the IDCS algorithm, the number of cuckoos was 30, the maximum number of iterations was set to 500, and the experiments were conducted on a laptop with an Intel Core TM 2 Duo 2.00 GHz CPU and 3 GB of RAM. In the DBA algorithm, the number of bats was 15, the maximum number of iterations was 200, and the experiments were conducted on a PC with an Intel Core 2 Duo 2.1 GHz CPU and 2 GB of RAM. Among the 41 instances, the DBSA algorithm found the optimal solution on 31 instances, DBA also found 31 optimal solutions, and IDCS found the optimal solution on only 27 instances. The average PA values for the DBSA, DBA, and IDCS algorithms are 0.18, 0.45, and 0.60, respectively, which means that the DBSA algorithm is the most stable. When the city size is small, the average solutions found by the three algorithms are similar, but when the city size is greater than 150, the PA values, best values, and average values obtained by the DBSA algorithm are all superior to those of the IDCS and DBA algorithms.


No. | Instance | Opt | DBA Best | DBA Avg | DBA PB | DBA PA | IDCS Best | IDCS Avg | IDCS PB | IDCS PA | DBSA Best | DBSA Avg | DBSA PB | DBSA PA
1 | eil51 | 426 | 426 | 426 | 0.00 | 0.00 | 426 | 426 | 0.00 | 0.00 | 426 | 426.2 | 0.00 | 0.05
2 | berlin52 | 7542 | 7542 | 7542 | 0.00 | 0.00 | 7542 | 7542 | 0.00 | 0.00 | 7542 | 7542 | 0.00 | 0.00
3 | st70 | 675 | 675 | 675 | 0.00 | 0.00 | 675 | 675 | 0.00 | 0.00 | 675 | 675.25 | 0.00 | 0.04
4 | eil76 | 538 | 538 | 538.76 | 0.00 | 0.14 | 538 | 538.03 | 0.00 | 0.00 | 538 | 538 | 0.00 | 0.00
5 | pr76 | 108159 | 108159 | 108159 | 0.00 | 0.00 | 108159 | 108159 | 0.00 | 0.00 | 108159 | 108293.7 | 0.00 | 0.12
6 | kroA100 | 21282 | 21282 | 21282 | 0.00 | 0.00 | 21282 | 21282 | 0.00 | 0.00 | 21282 | 21286.6 | 0.00 | 0.02
7 | kroB100 | 22141 | 22141 | 22141 | 0.00 | 0.00 | 22141 | 22141.53 | 0.00 | 0.00 | 22141 | 22215.7 | 0.00 | 0.34
8 | kroC100 | 20749 | 20749 | 20753.36 | 0.00 | 0.02 | 20749 | 20749 | 0.00 | 0.00 | 20749 | 20749 | 0.00 | 0.00
9 | kroD100 | 21294 | 21294 | 21303.50 | 0.00 | 0.04 | 21294 | 21304.33 | 0.00 | 0.04 | 21294 | 21302.75 | 0.00 | 0.04
10 | kroE100 | 22068 | 22068 | 22080.76 | 0.00 | 0.05 | 22068 | 22081.26 | 0.00 | 0.06 | 22068 | 22089.05 | 0.00 | 0.10
11 | eil101 | 629 | 629 | 632.43 | 0.00 | 0.54 | 629 | 630.43 | 0.00 | 0.22 | 629 | 629 | 0.00 | 0.00
12 | lin105 | 14379 | 14379 | 14379 | 0.00 | 0.00 | 14379 | 14379 | 0.00 | 0.00 | 14379 | 14379 | 0.00 | 0.00
13 | pr107 | 44303 | 44303 | 44360.8 | 0.00 | 0.13 | 44303 | 44307.06 | 0.00 | 0.00 | 44303 | 44368.35 | 0.00 | 0.15
14 | pr124 | 59030 | 59030 | 59037.66 | 0.00 | 0.01 | 59030 | 59030 | 0.00 | 0.00 | 59030 | 59035.7 | 0.00 | 0.01
15 | bier127 | 118282 | 118282 | 118385.66 | 0.00 | 0.08 | 118282 | 118359.63 | 0.00 | 0.06 | 118282 | 118302.9 | 0.00 | 0.02
16 | ch130 | 6110 | 6110 | 6124.1 | 0.00 | 0.23 | 6110 | 6135.96 | 0.00 | 0.42 | 6110 | 6123 | 0.00 | 0.21
17 | pr136 | 96772 | 96772 | 96995 | 0.00 | 0.23 | 96790 | 97009.26 | 0.01 | 0.24 | 96772 | 96817.7 | 0.00 | 0.05
18 | pr144 | 58537 | 58537 | 58537 | 0.00 | 0.00 | 58537 | 58537 | 0.00 | 0.00 | 58537 | 58552.9 | 0.00 | 0.03
19 | ch150 | 6528 | 6528 | 6550.3 | 0.00 | 0.34 | 6528 | 6549.9 | 0.00 | 0.33 | 6528 | 6538.6 | 0.00 | 0.16
20 | kroA150 | 26524 | 26524 | 26560.2 | 0.00 | 0.13 | 26524 | 26569.26 | 0.00 | 0.17 | 26524 | 26534.1 | 0.00 | 0.04
21 | kroB150 | 26130 | 26130 | 26146.63 | 0.00 | 0.06 | 26130 | 26159.3 | 0.00 | 0.11 | 26130 | 26137.25 | 0.00 | 0.03
22 | pr152 | 73682 | 73682 | 73759.06 | 0.00 | 0.10 | 73682 | 73682 | 0.00 | 0.00 | 73682 | 73716 | 0.00 | 0.05
23 | rat195 | 2323 | 2324 | 2340.7 | 0.04 | 0.76 | 2324 | 2341.86 | 0.04 | 0.81 | 2328 | 2330.05 | 0.22 | 0.30
24 | d198 | 15780 | 15780 | 15802.83 | 0.00 | 0.14 | 15781 | 15807.66 | 0.00 | 0.17 | 15780 | 15788.05 | 0.00 | 0.05
25 | kroA200 | 29368 | 29368 | 29449.23 | 0.00 | 0.27 | 29382 | 29446.66 | 0.04 | 0.26 | 29368 | 29392.05 | 0.00 | 0.08
26 | kroB200 | 29437 | 29439 | 29527.4 | 0.00 | 0.30 | 29448 | 29542.49 | 0.03 | 0.29 | 29437 | 29451.15 | 0.00 | 0.05
27 | ts225 | 126643 | 126643 | 126643 | 0.00 | 0.00 | 126643 | 126659.23 | 0.00 | 0.01 | 126643 | 126643 | 0.00 | 0.00
28 | tsp225 | 3916 | 3916 | 3944.8 | 0.00 | 0.73 | 3916 | 3958.76 | 0.00 | 1.09 | 3919 | 3932.9 | 0.08 | 0.43
29 | pr226 | 80369 | 80369 | 80409.1 | 0.00 | 0.04 | 80369 | 80386.66 | 0.00 | 0.02 | 80369 | 80404.4 | 0.00 | 0.04
30 | gil262 | 2378 | 2380 | 2390.7 | 0.08 | 0.53 | 2382 | 2394.5 | 0.16 | 0.68 | 2378 | 2379.6 | 0.00 | 0.07
31 | pr264 | 49135 | 49135 | 49167.9 | 0.00 | 0.06 | 49135 | 49257.5 | 0.00 | 0.24 | 49135 | 49135 | 0.00 | 0.00
32 | a280 | 2579 | 2579 | 2611 | 0.00 | 0.30 | 2579 | 2592.33 | 0.00 | 0.51 | 2579 | 2579 | 0.00 | 0.00
33 | pr299 | 48191 | 48191 | 48311.7 | 0.00 | 0.25 | 48207 | 48470.53 | 0.03 | 0.58 | 48191 | 48241.55 | 0.00 | 0.10
34 | lin318 | 42029 | 42154 | 42462.16 | 0.29 | 1.03 | 42125 | 42434.73 | 0.22 | 0.96 | 42061 | 42209.7 | 0.08 | 0.43
35 | rd400 | 15281 | 15336 | 15465.3 | 0.35 | 1.20 | 15447 | 15533.73 | 1.08 | 1.65 | 15301 | 15329.75 | 0.13 | 0.32
36 | fl417 | 11861 | 11865 | 11884.1 | 0.03 | 0.19 | 11873 | 11910.53 | 0.10 | 0.41 | 11878 | 11929.45 | 0.14 | 0.58
37 | pr439 | 107217 | 107291 | 107683.33 | 0.06 | 0.43 | 107447 | 107960.5 | 0.21 | 0.69 | 107285 | 107369.6 | 0.06 | 0.14
38 | rat575 | 6773 | 6862 | 6903.83 | 1.31 | 1.93 | 6896 | 6956.73 | 1.81 | 2.71 | 6810 | 6828.1 | 0.55 | 0.81
39 | rat783 | 8806 | 8948 | 9010.4 | 1.61 | 2.32 | 9043 | 9109.26 | 2.69 | 3.44 | 8836 | 8873.9 | 0.34 | 0.77
40 | pr1002 | 259045 | 266146 | 266412.8 | 2.74 | 2.84 | 266508 | 268630.03 | 2.88 | 3.70 | 260687 | 261339.9 | 0.63 | 0.89
41 | nrw1379 | 56638 | 58188 | 58299 | 2.73 | 2.93 | 58951 | 59349.53 | 4.08 | 4.78 | 57024 | 57126.2 | 0.68 | 0.86

Table 3 compares the running time of the DBSA algorithm with the IDCS and DBA algorithms. The three algorithms ran on different machines, but the performances of the machines were similar. It can be observed from Table 3 that the solving speed of the DBSA algorithm is much faster than that of the other two algorithms. When the city size reaches 1379, the IDCS algorithm takes about one hour and DBA takes about half an hour, while the DBSA algorithm still takes only about 15 seconds. The average running times of DBSA, IDCS, and DBA were 1.74, 205.97, and 139.08 seconds, respectively. Due to their slower solving speed, the city sizes that the IDCS and DBA algorithms can handle are very limited. Moreover, Figures 3, 4, and 5 give the roadmaps of instances eil76, eil101, and ch150 obtained by the DBSA algorithm, respectively. The points in the roadmaps denote the city index numbers. These roadmaps further demonstrate the effectiveness of the proposed method.


No. | Instance | DBA (s) | IDCS (s) | DBSA (s)
1 | Eil51 | 0.20 | 1.16 | 0.16
2 | Berlin52 | 0.03 | 0.09 | 0.02
3 | St70 | 0.43 | 1.56 | 0.21
4 | Pr76 | 0.57 | 4.73 | 0.11
5 | Eil76 | 1.54 | 6.54 | 0.10
6 | KroA100 | 1.36 | 2.70 | 0.21
7 | KroB100 | 3.35 | 8.74 | 0.49
8 | KroC100 | 2.51 | 3.36 | 0.21
9 | KroD100 | 7.55 | 8.35 | 0.38
10 | KroE100 | 11.12 | 14.18 | 0.53
11 | Eil101 | 17.09 | 18.74 | 0.31
12 | Lin105 | 2.27 | 5.01 | 0.25
13 | Pr107 | 18.01 | 12.89 | 0.43
14 | Pr124 | 2.57 | 3.36 | 0.17
15 | Bier127 | 19.14 | 25.50 | 0.50
16 | Ch130 | 13.68 | 23.12 | 0.51
17 | Pr136 | 22.10 | 35.82 | 0.69
18 | Pr144 | 2.12 | 2.96 | 0.58
19 | Ch150 | 25.70 | 27.74 | 0.79
20 | KroA150 | 21.75 | 31.23 | 0.78
21 | KroB150 | 22.17 | 33.01 | 0.81
22 | Pr152 | 15.24 | 14.86 | 0.64
23 | Rat195 | 42.30 | 57.25 | 1.06
24 | D198 | 38.75 | 59.95 | 1.16
25 | KroA200 | 46.97 | 62.08 | 1.12
26 | KroB200 | 53.10 | 64.06 | 1.13
27 | Ts225 | 18.24 | 47.51 | 0.55
28 | Tsp225 | 80.61 | 76.16 | 1.30
29 | Pr226 | 44.89 | 50.00 | 1.04
30 | Gil262 | 81.25 | 102.39 | 1.58
31 | Pr264 | 64.51 | 82.93 | 1.37
32 | A280 | 28.61 | 15.57 | 1.37
33 | Pr299 | 102.64 | 138.20 | 1.84
34 | Lin318 | 120.14 | 156.17 | 2.04
35 | Rd400 | 194.11 | 264.94 | 2.81
36 | Fl417 | 112.36 | 274.59 | 3.75
37 | Pr439 | 223.09 | 308.75 | 3.22
38 | Rat575 | 423.56 | 506.67 | 4.07
39 | Rat783 | 758.49 | 968.66 | 6.52
40 | Pr1002 | 1195.20 | 1662.61 | 9.94
41 | Nrw1379 | 1863.12 | 3160.47 | 16.52

Table 4 shows the results of the comparison between the DBSA and HABC algorithms. The HABC algorithm ran on a 2.83 GHz PC with 2 GB of RAM, a machine faster than the one used for DBSA. From the experimental results, DBSA consistently found 3 optimal solutions, while HABC found 2. The average PA values of the DBSA and HABC algorithms are 0.64 and 0.65, respectively. The average runtimes of DBSA and HABC are 98.20 and 116.17 seconds, respectively. Therefore, the performance of the DBSA algorithm is slightly better than that of the HABC algorithm.


Table 4: Comparison between the HABC and DBSA algorithms (time in seconds).

No | Instance | Opt | HABC PA | HABC Time | DBSA PA | DBSA Time
---|----------|-----|---------|-----------|---------|----------
1 | Ch150 | 6528 | 0.31 | 1.86 | 0.20 | 1.19
2 | KroA150 | 26,524 | 0.05 | 1.88 | 0.02 | 1.21
3 | KroB150 | 26,130 | -0.01 | 1.82 | 0.06 | 1.21
4 | Pr152 | 73,682 | 0.00 | 1.89 | 0.10 | 0.87
5 | U159 | 42,080 | -0.01 | 2.01 | 0.00 | 0.44
6 | Rat195 | 2323 | 0.61 | 2.18 | 0.32 | 1.88
7 | D198 | 15,780 | 0.27 | 2.28 | 0.05 | 1.80
8 | KroA200 | 29,368 | 0.05 | 2.40 | 0.06 | 1.49
9 | KroB200 | 29,437 | 0.02 | 2.31 | 0.02 | 1.41
10 | Ts225 | 126,643 | 0.00 | 2.78 | 0.00 | 1.11
11 | Pr226 | 80,369 | 0.00 | 2.84 | 0.03 | 1.72
12 | Gil262 | 2378 | 0.38 | 3.42 | 0.07 | 2.33
13 | Pr264 | 49,135 | 0.00 | 2.85 | 0.00 | 1.29
14 | Pr299 | 48,191 | 0.11 | 3.70 | 0.07 | 2.67
15 | Lin318 | 42,029 | 0.26 | 3.52 | 0.40 | 3.10
16 | Rd400 | 15,281 | 0.26 | 4.98 | 0.28 | 4.23
17 | Fl417 | 11,861 | 1.01 | 5.61 | 0.61 | 4.75
18 | Pr439 | 107,217 | 0.22 | 5.68 | 0.14 | 4.53
19 | Pcb442 | 50,778 | 0.15 | 5.93 | 0.33 | 4.56
20 | U574 | 36,905 | 0.37 | 8.85 | 0.83 | 6.05
21 | Rat575 | 6773 | 0.75 | 8.83 | 0.77 | 6.08
22 | U724 | 41,910 | 0.33 | 13.43 | 0.62 | 8.82
23 | Rat783 | 8806 | 0.91 | 15.29 | 0.76 | 8.80
24 | Pr1002 | 259,045 | 0.71 | 11.19 | 0.83 | 12.54
25 | Pcb1173 | 56,892 | 0.77 | 14.67 | 0.88 | 14.66
26 | D1291 | 50,801 | 1.64 | 14.32 | 0.79 | 16.01
27 | Rl1323 | 270,199 | 0.50 | 15.29 | 0.69 | 17.23
28 | Fl1400 | 20,127 | 1.29 | 18.16 | 1.76 | 32.46
29 | D1655 | 62,128 | 1.28 | 21.28 | 1.01 | 22.31
30 | Vm1748 | 336,556 | 0.72 | 25.21 | 0.67 | 26.19
31 | U2319 | 234,256 | 0.26 | 25.54 | 0.38 | 33.22
32 | Pcb3038 | 137,694 | 1.03 | 40.42 | 0.84 | 45.06
33 | Fnl4461 | 182,566 | 1.30 | 44.21 | 1.14 | 89.86
34 | Rl5934 | 556,045 | 1.79 | 63.74 | 1.09 | 135.28
35 | Pla7397 | 23,260,728 | 1.47 | 108.34 | 2.38 | 300.23
36 | Usa13509 | 19,982,859 | 1.57 | 434.04 | 1.84 | 563.02
37 | Brd14051 | 468,385 | 1.67 | 452.87 | 1.54 | 343.29
38 | D18512 | 645,238 | 1.51 | 816.38 | 1.22 | 446.58
39 | Pla33810 | 66,048,945 | 1.81 | 2318.80 | 2.26 | 1660.20

4.3. Compare with Some Recently Improved Classical Metaheuristics

In order to further observe the performance of the DBSA algorithm and lend more credibility to our improvement, DBSA was compared with several recently improved classical metaheuristic algorithms: the DBMEA [12], DMRSA [13], ACE [1], and HGA [4] algorithms. Table 5 gives the results of the comparison between DBSA and ACE on 22 TSP instances, where Ry48, Ftv70, Ftv170, and Kro124p are asymmetric TSP instances. ACE is an extended ant colony algorithm. The maximum number of iterations of the ACE algorithm is 400k, where k denotes the city size, while the maximum number of iterations of DBSA is 2000. Among the 22 TSP instances, DBSA and ACE both found 19 optimal solutions, and DBSA consistently found the optimal solution on 2 instances, while ACE did so on none. The average PA values of DBSA and ACE are 0.29 and 0.59, respectively, and their average PB values are 0.06 and 0.09, respectively. In most cases, both the average and the worst solutions obtained by DBSA are better than those of ACE. Therefore, the performance of DBSA is significantly better than that of ACE. Table 6 compares the DBSA algorithm with the DBMEA algorithm on 14 symmetric VLSI TSP benchmark problems taken from [12]. The number of bacteria was 100 in the DBMEA algorithm. From the experimental results, the average PB values of DBMEA and DBSA are 1.43 and 0.36, respectively, and the average PA values of DBMEA and DBSA are 1.48 and 0.62, respectively. Therefore, in terms of both the best and the average solutions, the DBSA algorithm is superior to the DBMEA algorithm.
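Throughout these tables, PB and PA appear to denote the percentage deviation of the best and average tour lengths from the known optimum; under that assumption, the tabulated values can be reproduced as follows (the Eil51 row of Table 5 is used as a check).

```python
def percent_deviation(length, opt):
    """Percentage deviation of a tour length from the known optimum."""
    return 100.0 * (length - opt) / opt

# Eil51 row of Table 5: opt = 426, ACE best = 426, ACE avg = 426.818.
opt = 426
pb = percent_deviation(426, opt)      # deviation of the best solution
pa = percent_deviation(426.818, opt)  # deviation of the average solution
print(round(pb, 2), round(pa, 2))  # 0.0 0.19, matching the table
```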


Table 5: Comparison between the ACE and DBSA algorithms.

No | Instance | Opt | ACE Best | ACE Worst | ACE Avg | ACE PB | ACE PA | DBSA Best | DBSA Worst | DBSA Avg | DBSA PB | DBSA PA
---|----------|-----|----------|-----------|---------|--------|--------|-----------|------------|----------|---------|--------
1 | Ry48 | 14422 | 14422 | 14883 | 14495.8 | 0.00 | 0.51 | 14422 | 14673 | 14521.65 | 0.00 | 0.69
2 | Eil51 | 426 | 426 | 432 | 426.818 | 0.00 | 0.19 | 426 | 427 | 426.4 | 0.00 | 0.09
3 | Berlin52 | 7542 | 7542 | 7715 | 7543.04 | 0.00 | 0.01 | 7542 | 7542 | 7542 | 0.00 | 0.00
4 | Ftv70 | 1950 | 1950 | 2060 | 1968.16 | 0.00 | 0.93 | 1950 | 2013 | 1957.45 | 0.00 | 0.38
5 | St70 | 675 | 675 | 691 | 676.418 | 0.00 | 0.21 | 675 | 684 | 678.9 | 0.00 | 0.58
6 | Eil76 | 538 | 538 | 546 | 538.311 | 0.00 | 0.06 | 538 | 538 | 538 | 0.00 | 0.00
7 | Pr76 | 108159 | 108159 | 109085 | 108251 | 0.00 | 0.09 | 108159 | 109186 | 108210.4 | 0.00 | 0.05
8 | Rat99 | 1211 | 1211 | 1246 | 1213.29 | 0.00 | 0.19 | 1211 | 1212 | 1211.7 | 0.00 | 0.06
9 | KroA100 | 21282 | 21282 | 21600 | 21298.6 | 0.00 | 0.08 | 21282 | 21305 | 21285.45 | 0.00 | 0.02
10 | Eil101 | 629 | 629 | 644 | 633.619 | 0.00 | 0.73 | 629 | 631 | 629.35 | 0.00 | 0.06
11 | Lin105 | 14379 | 14379 | 14514 | 14385.5 | 0.00 | 0.05 | 14379 | 14401 | 14384.5 | 0.00 | 0.04
12 | Kro124p | 36230 | 36230 | 37777 | 36460.8 | 0.00 | 0.64 | 36230 | 36459 | 36256.9 | 0.00 | 0.07
13 | Ch130 | 6110 | 6110 | 6259 | 6153.96 | 0.00 | 0.72 | 6110 | 6160 | 6127.9 | 0.00 | 0.29
14 | Ch150 | 6528 | 6528 | 6670 | 6550 | 0.00 | 0.34 | 6549 | 6564 | 6555.7 | 0.32 | 0.42
15 | Pr152 | 73682 | 73682 | 74802 | 73766.8 | 0.00 | 0.12 | 73682 | 74287 | 73885.2 | 0.00 | 0.28
16 | U159 | 42080 | 42080 | 43816 | 42199.8 | 0.00 | 0.28 | 42080 | 42704 | 42377.45 | 0.00 | 0.71
17 | Ftv170 | 2755 | 2755 | 3007 | 2824.08 | 0.00 | 2.51 | 2755 | 2787 | 2768.75 | 0.00 | 0.50
18 | D198 | 15780 | 15780 | 15899 | 15813.3 | 0.00 | 0.21 | 15780 | 15818 | 15798.8 | 0.00 | 0.12
19 | Gil262 | 2378 | 2378 | 2413 | 2390.06 | 0.00 | 0.51 | 2378 | 2390 | 2382.6 | 0.00 | 0.19
20 | D493 | 35002 | 35123 | 35770 | 35449.3 | 0.35 | 1.28 | 35106 | 35332 | 35200.45 | 0.30 | 0.57
21 | Rat783 | 8806 | 8868 | 9010 | 8936.7 | 0.70 | 1.48 | 8836 | 8882 | 8855.2 | 0.34 | 0.56
22 | Fl1400 | 20127 | 20328 | 21067 | 20491.7 | 1.00 | 1.81 | 20205 | 20379 | 20273.65 | 0.39 | 0.73


Table 6: Comparison between the DBMEA and DBSA algorithms on VLSI TSP instances.

No | Instance | Opt | DBMEA PB | DBMEA PA | DBSA PB | DBSA PA
---|----------|-----|----------|----------|---------|--------
1 | xql662 | 2513 | 1.51 | 1.51 | 0.08 | 0.35
2 | dkg813 | 3199 | 1.39 | 1.39 | 0.13 | 0.42
3 | dka1376 | 4666 | 1.56 | 1.65 | 0.32 | 0.65
4 | dca1389 | 5085 | 1.40 | 1.44 | 0.35 | 0.69
5 | dja1436 | 5257 | 1.43 | 1.50 | 0.17 | 0.46
6 | icw1483 | 4416 | 1.16 | 1.27 | 0.18 | 0.38
7 | rbv1583 | 5387 | 1.10 | 1.17 | 0.45 | 0.62
8 | rby1599 | 5533 | 1.01 | 1.07 | 0.31 | 0.63
9 | dea2382 | 8017 | 1.71 | 1.80 | 0.21 | 0.41
10 | pds2566 | 7643 | 1.73 | 1.81 | 0.31 | 0.65
11 | bch2762 | 8234 | 1.73 | 1.78 | 0.52 | 0.80
12 | fdp3256 | 10008 | 0.96 | 1.01 | 0.59 | 0.81
13 | dkc3938 | 12503 | 1.70 | 1.76 | 0.72 | 0.89
14 | xqd4966 | 15316 | 1.56 | 1.60 | 0.71 | 0.88

Table 7 shows the comparison results of the DBSA and DMRSA algorithms on 24 TSP instances [13]. The maximum number of iterations of the DBSA algorithm is 2000, whereas DMRSA uses a local search strategy that requires many additional internal iterations. The numbers of optimal solutions found by the DMRSA and DBSA algorithms are both 17. The average PB values of DMRSA and DBSA are 0.37 and 0.06, respectively, and their average PA values are 0.65 and 0.22, respectively. When the city size is less than 200, the performance of DBSA and DMRSA is very close; when the city size is greater than 200, DBSA performs significantly better than DMRSA. Table 8 compares the running times of DMRSA and DBSA on 10 TSP instances. As can be seen from the table, the average running time of the DMRSA algorithm is about 50 times that of the DBSA algorithm. Therefore, the performance of the DBSA algorithm is superior to that of the DMRSA algorithm.
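The "about 50 times" figure can be reproduced directly from the Table 8 data; a quick sketch (the times are copied from the table):

```python
# Running times in seconds from Table 8, in table order att48 through kroA200.
dmrsa = [10.74, 10.48, 11.06, 17.12, 20.18, 19.41, 29.66, 32.17, 29.53, 154.57]
dbsa = [0.36, 0.47, 0.03, 0.13, 0.25, 0.15, 0.64, 0.69, 0.67, 3.21]

ratio = (sum(dmrsa) / len(dmrsa)) / (sum(dbsa) / len(dbsa))
print(round(ratio, 1))  # roughly 50, as stated in the text
```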


Table 7: Comparison between the DMRSA and DBSA algorithms.

No | Instance | Opt | DMRSA Best | DMRSA Avg | DMRSA PB | DMRSA PA | DBSA Best | DBSA Avg | DBSA PB | DBSA PA
---|----------|-----|------------|-----------|----------|----------|-----------|----------|---------|--------
1 | eil51 | 426 | 426 | 426.48 | 0.00 | 0.11 | 426 | 426.55 | 0.00 | 0.13
2 | berlin52 | 7542 | 7542 | 7542 | 0.00 | 0.00 | 7542 | 7542 | 0.00 | 0.00
3 | eil76 | 538 | 538 | 540.36 | 0.00 | 0.44 | 538 | 538.15 | 0.00 | 0.03
4 | rd100 | 7910 | 7910 | 7912.53 | 0.00 | 0.03 | 7910 | 7922 | 0.00 | 0.15
5 | kroA100 | 21282 | 21282 | 21282 | 0.00 | 0.00 | 21282 | 21286.6 | 0.00 | 0.02
6 | kroB100 | 22141 | 22141 | 22184.61 | 0.00 | 0.20 | 22141 | 22190.2 | 0.00 | 0.22
7 | kroC100 | 20749 | 20749 | 20751.14 | 0.00 | 0.01 | 20749 | 20759.3 | 0.00 | 0.05
8 | kroD100 | 21294 | 21294 | 21296.70 | 0.00 | 0.01 | 21294 | 21309 | 0.00 | 0.07
9 | kroE100 | 22068 | 22068 | 22114.96 | 0.00 | 0.21 | 22068 | 22140.55 | 0.00 | 0.33
10 | eil101 | 629 | 629 | 630.98 | 0.00 | 0.31 | 629 | 629.3 | 0.00 | 0.05
11 | lin105 | 14379 | 14379 | 14379 | 0.00 | 0.00 | 14379 | 14382.3 | 0.00 | 0.02
12 | bier127 | 118282 | 118282 | 118435.63 | 0.00 | 0.13 | 118282 | 118353.4 | 0.00 | 0.06
13 | ch130 | 6110 | 6110 | 6120.36 | 0.00 | 0.17 | 6110 | 6125.15 | 0.00 | 0.25
14 | ch150 | 6528 | 6528 | 6547.28 | 0.00 | 0.30 | 6528 | 6552.8 | 0.00 | 0.38
15 | kroA150 | 26524 | 26524 | 26568.67 | 0.00 | 0.17 | 26524 | 26564.6 | 0.00 | 0.15
16 | kroB150 | 26130 | 26130 | 26144.85 | 0.00 | 0.06 | 26132 | 26148.9 | 0.01 | 0.07
17 | kroA200 | 29368 | 29368 | 29439.08 | 0.00 | 0.24 | 29368 | 29434.8 | 0.00 | 0.23
18 | kroB200 | 29437 | 29438 | 29578.44 | 0.00 | 0.48 | 29437 | 29468.8 | 0.00 | 0.11
19 | lin318 | 42029 | 42040 | 42232.04 | 0.03 | 0.48 | 42119 | 42259.7 | 0.21 | 0.55
20 | rat575 | 6773 | 6909 | 6953.225 | 2.01 | 2.66 | 6795 | 6804 | 0.32 | 0.46
21 | rat783 | 8806 | 9031 | 9090.37 | 2.56 | 3.23 | 8824 | 8839.4 | 0.20 | 0.38
22 | rl1323 | 270199 | 271640 | 274083.42 | 0.53 | 1.44 | 270356 | 270807.1 | 0.06 | 0.23
23 | fl1400 | 20127 | 20348 | 20464.31 | 1.10 | 1.68 | 20194 | 20255.3 | 0.33 | 0.64
24 | d1655 | 62128 | 63847 | 64202.7 | 2.77 | 3.34 | 62326 | 62507.65 | 0.32 | 0.61


Table 8: Comparison of running times (in seconds) of the DMRSA and DBSA algorithms.

Instance | DMRSA | DBSA
---------|-------|-----
att48 | 10.74 | 0.36
eil51 | 10.48 | 0.47
berlin52 | 11.06 | 0.03
st70 | 17.12 | 0.13
eil76 | 20.18 | 0.25
pr76 | 19.41 | 0.15
kroA100 | 29.66 | 0.64
rd100 | 32.17 | 0.69
eil101 | 29.53 | 0.67
kroA200 | 154.57 | 3.21

Table 9 gives the comparison results between the DBSA and HGA algorithms. The HGA algorithm uses two local optimization strategies, so it actually needs more iterations. The numbers of optimal solutions found by the HGA and DBSA algorithms are 9 and 30, respectively. Moreover, DBSA consistently found the optimal solution on 9 instances, while HGA did so on none. The average PB values of DBSA and HGA are 0.01 and 0.74, respectively, and their average PA values are 0.10 and 1.05, respectively. Therefore, the overall performance of the DBSA algorithm is better than that of the HGA algorithm.
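The counts of optimal solutions quoted above correspond to counting the rows of Table 9 whose PB value is 0.00; a sketch (the two PB columns are copied from the table in row order):

```python
# PB columns of Table 9, rows 1 (Eil51) through 35 (Pcb442).
hga_pb = [0.67, 0.03, 0.31, 1.18, 0.00, 0.68, 0.02, 0.01, 0.00, 0.00,
          1.78, 0.03, 0.00, 0.00, 0.01, 0.01, 0.00, 0.00, -0.01, 0.04,
          0.00, 1.04, 0.74, 0.00, 0.05, -0.95, 1.18, 0.08, 0.03, 3.13,
          2.64, 1.42, 5.03, 2.76, 4.13]
# DBSA PB is 0.00 everywhere except rows 22, 32, 33, 34, and 35.
dbsa_pb = [0.00] * 21 + [0.22] + [0.00] * 9 + [0.04, 0.11, 0.05, 0.07]

hga_opt = sum(p == 0.00 for p in hga_pb)
dbsa_opt = sum(p == 0.00 for p in dbsa_pb)
print(hga_opt, dbsa_opt)  # 9 30, matching the counts quoted in the text
```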


Table 9: Comparison between the HGA and DBSA algorithms.

No | Instance | Opt | HGA PB | HGA PA | DBSA PB | DBSA PA
---|----------|-----|--------|--------|---------|--------
1 | Eil51 | 426 | 0.67 | 0.75 | 0.00 | 0.09
2 | Berlin52 | 7542 | 0.03 | 0.03 | 0.00 | 0.00
3 | St70 | 675 | 0.31 | 0.35 | 0.00 | 0.00
4 | Eil76 | 538 | 1.18 | 1.50 | 0.00 | 0.00
5 | Pr76 | 108,159 | 0.00 | 0.09 | 0.00 | 0.08
6 | Rat99 | 1211 | 0.68 | 0.90 | 0.00 | 0.01
7 | KroA100 | 21,282 | 0.02 | 0.14 | 0.00 | 0.01
8 | KroC100 | 20,749 | 0.01 | 0.30 | 0.00 | 0.00
9 | KroD100 | 21,294 | 0.00 | 0.24 | 0.00 | 0.08
10 | Rd100 | 7910 | 0.00 | 0.04 | 0.00 | 0.00
11 | Eil101 | 629 | 1.78 | 2.52 | 0.00 | 0.00
12 | Lin105 | 14,379 | 0.03 | 0.31 | 0.00 | 0.02
13 | Pr107 | 44,303 | 0.00 | 0.09 | 0.00 | 0.15
14 | Pr124 | 59,030 | 0.00 | 0.11 | 0.00 | 0.00
15 | Ch130 | 6110 | 0.01 | 0.33 | 0.00 | 0.21
16 | Pr136 | 96,772 | 0.01 | 0.26 | 0.00 | 0.12
17 | Pr144 | 58,537 | 0.00 | 0.00 | 0.00 | 0.03
18 | KroA150 | 26,524 | 0.00 | 0.28 | 0.00 | 0.02
19 | KroB150 | 26,130 | -0.01 | 0.79 | 0.00 | 0.05
20 | Ch150 | 6528 | 0.04 | 0.45 | 0.00 | 0.15
21 | Pr152 | 73,682 | 0.00 | 0.11 | 0.00 | 0.10
22 | Rat195 | 2323 | 1.04 | 1.42 | 0.22 | 0.36
23 | D198 | 15,780 | 0.74 | 1.16 | 0.00 | 0.06
24 | KroA200 | 29,368 | 0.00 | 0.31 | 0.00 | 0.05
25 | KroB200 | 29,437 | 0.05 | 0.50 | 0.00 | 0.04
26 | Tsp225 | 3916 | -0.95 | -0.59 | 0.00 | 0.39
27 | Ts225 | 126,643 | 1.18 | 1.30 | 0.00 | 0.00
28 | Pr226 | 80,369 | 0.08 | 0.21 | 0.00 | 0.05
29 | Pr264 | 49,135 | 0.03 | 0.06 | 0.00 | 0.00
30 | A280 | 2579 | 3.13 | 3.77 | 0.00 | 0.00
31 | Pr299 | 48,191 | 2.64 | 3.25 | 0.00 | 0.10
32 | Lin318 | 42,029 | 1.42 | 2.02 | 0.04 | 0.45
33 | Rd400 | 15,281 | 5.03 | 5.65 | 0.11 | 0.35
34 | Pr439 | 107,217 | 2.76 | 3.72 | 0.05 | 0.14
35 | Pcb442 | 50,778 | 4.13 | 4.41 | 0.07 | 0.30

5. Conclusion and Future Work

The BSA algorithm is a novel metaheuristic inspired by bird swarms and was first proposed for continuous optimization problems. In order to apply it to combinatorial optimization problems such as the TSP, appropriate strategies are needed to preserve the characteristics of the original algorithm, and suitable schemes must be designed for each combinatorial optimization problem. Based on these principles, this paper presented a novel discrete BSA with an information entropy matrix. Guided by the information entropy matrix, a minus function is introduced to evaluate the difference between two solutions, and the position updating equations of the birds are redesigned to update the information entropy matrix. Meanwhile, three TSP operators are introduced to produce new solutions according to the information entropy matrix. Experimental results show that these strategies are very efficient for the TSP, and DBSA significantly outperforms many metaheuristic algorithms in most cases.
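As an illustration of the edge-based representation summarized above, the following sketch shows one plausible form of the minus operation: each tour is mapped to the set of undirected edges it uses (the paper's Boolean edge matrix, represented here with edge sets for brevity), and the difference between two tours is the set of edges used by one but not the other. The exact definitions in the paper may differ; all names here are illustrative.

```python
def edge_set(tour):
    """Undirected edge set of a closed TSP tour given as a list of city indices."""
    n = len(tour)
    return {frozenset((tour[k], tour[(k + 1) % n])) for k in range(n)}

def minus(tour_a, tour_b):
    """Illustrative 'minus' operation: edges used by tour_a but not by tour_b.
    The size of this set measures how different the two tours are."""
    return edge_set(tour_a) - edge_set(tour_b)

# Two 5-city tours that share the edges (0,1), (1,2), and (3,4).
diff = minus([0, 1, 2, 3, 4], [0, 1, 2, 4, 3])
print(sorted(sorted(e) for e in diff))  # [[0, 4], [2, 3]]
```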

In our future research, we will apply the design principles and analysis procedure of the proposed DBSA algorithm to guide the design and implementation of other metaheuristic algorithms for other discrete optimization problems.

Data Availability

The TSP data used to support the findings of this study have been deposited in the TSPLIB repository (https://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the Natural Science Foundation of Fujian Province (no. 2015J01233), projects of the Fujian Provincial Department of Education (nos. JAT160143 and JAT170181), and the Special Fund for Scientific and Technological Innovation of Fujian Agriculture and Forestry University (nos. CXZX2016026 and CXZX2016031).

References

  1. J. B. Escario, J. F. Jimenez, and J. M. Giron-Sierra, “Ant colony extended: experiments on the travelling salesman problem,” Expert Systems with Applications, vol. 42, no. 1, pp. 390–410, 2015.
  2. H. Ismkhan, “Effective heuristics for ant colony optimization to handle large-scale problems,” Swarm and Evolutionary Computation, vol. 32, pp. 140–149, 2017.
  3. Y. Zhong, J. Lin, L. Wang, and H. Zhang, “Hybrid discrete artificial bee colony algorithm with threshold acceptance criterion for traveling salesman problem,” Information Sciences, vol. 421, pp. 70–84, 2017.
  4. Y. Wang, “The hybrid genetic algorithm with two local optimization strategies for traveling salesman problem,” Computers & Industrial Engineering, vol. 70, no. 1, pp. 124–133, 2014.
  5. M. A. H. Akhand, S. Akter, M. A. Rashid, and S. B. Yaakob, “Velocity tentative PSO: an optimal velocity implementation based particle swarm optimization to solve traveling salesman problem,” IAENG International Journal of Computer Science, vol. 42, no. 3, pp. 1–12, 2015.
  6. A. Ouaarab, B. Ahiod, and X.-S. Yang, “Discrete cuckoo search algorithm for the travelling salesman problem,” Neural Computing and Applications, vol. 24, no. 7-8, pp. 1659–1669, 2014.
  7. Y. Zhou, X. Ouyang, and J. Xie, “A discrete cuckoo search algorithm for travelling salesman problem,” International Journal of Collaborative Intelligence, vol. 1, no. 1, p. 68, 2014.
  8. E. Osaba, X.-S. Yang, F. Diaz, P. Lopez-Garcia, and R. Carballedo, “An improved discrete bat algorithm for symmetric and asymmetric traveling salesman problems,” Engineering Applications of Artificial Intelligence, vol. 48, pp. 59–71, 2016.
  9. Y. Saji and M. E. Riffi, “A novel discrete bat algorithm for solving the travelling salesman problem,” Neural Computing and Applications, vol. 27, no. 7, pp. 1853–1866, 2016.
  10. M. Saraei, R. Analouei, and P. Mansouri, “Solving of travelling salesman problem using firefly algorithm with greedy approach,” Cumhuriyet Science Journal, vol. 36, no. 6, pp. 267–273, 2015.
  11. Y. Zhou, Q. Luo, H. Chen, A. He, and J. Wu, “A discrete invasive weed optimization algorithm for solving traveling salesman problem,” Neurocomputing, vol. 151, no. 3, pp. 1227–1236, 2015.
  12. L. T. Kóczy, P. Földesi, and B. Tüű-Szabó, “Enhanced discrete bacterial memetic evolutionary algorithm: an efficacious metaheuristic for the traveling salesman optimization,” Information Sciences, vol. 460-461, pp. 389–400, 2018.
  13. H. Zhang and J. Zhou, “Dynamic multiscale region search algorithm using vitality selection for traveling salesman problem,” Expert Systems with Applications, vol. 60, pp. 81–95, 2016.
  14. J. Ouenniche, P. K. Ramaswamy, and M. Gendreau, “A dual local search framework for combinatorial optimization problems with TSP application,” Journal of the Operational Research Society, vol. 68, no. 11, pp. 1377–1398, 2017.
  15. Z. Xu, Y. Wang, S. Li, Y. Liu, Y. Todo, and S. Gao, “Immune algorithm combined with estimation of distribution for traveling salesman problem,” IEEJ Transactions on Electrical and Electronic Engineering, vol. 11, pp. S142–S154, 2016.
  16. X. Geng, Z. Chen, W. Yang, D. Shi, and K. Zhao, “Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search,” Applied Soft Computing, vol. 11, no. 4, pp. 3680–3689, 2011.
  17. S. Chen and C. Chien, “Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques,” Expert Systems with Applications, vol. 38, no. 12, pp. 14439–14450, 2011.
  18. W. Deng, R. Chen, B. He, Y. Liu, L. Yin, and J. Guo, “A novel two-stage hybrid swarm intelligence optimization algorithm and application,” Soft Computing, vol. 16, no. 10, pp. 1707–1722, 2012.
  19. M. Mahi, Ö. K. Baykan, and H. Kodaz, “A new hybrid method based on particle swarm optimization, ant colony optimization and 3-Opt algorithms for traveling salesman problem,” Applied Soft Computing, vol. 30, pp. 484–490, 2015.
  20. Y. Zhong, J. Lin, L. Wang, and H. Zhang, “Discrete comprehensive learning particle swarm optimization algorithm with Metropolis acceptance criterion for traveling salesman problem,” Swarm and Evolutionary Computation, vol. 42, pp. 77–88, 2018.
  21. X. B. Meng, X. Z. Gao, L. Lu, Y. Liu, and H. Zhang, “A new bio-inspired optimisation algorithm: bird swarm algorithm,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 28, no. 4, pp. 673–687, 2016.
  22. M. Parashar, S. Rajput, H. M. Dubey, and M. Pandit, “Optimization of benchmark functions using a nature inspired bird swarm algorithm,” in Proceedings of the 3rd International Conference on Computational Intelligence & Communication Technology (CICT), pp. 1–7, Ghaziabad, India, February 2017.
  23. X. Wang, Y. Deng, and H. Duan, “Edge-based target detection for unmanned aerial vehicles using competitive bird swarm algorithm,” Aerospace Science and Technology, vol. 78, pp. 708–720, 2018.
  24. C. Zeng, C. Peng, K. Wang, Y. Zhang, and M. Zhang, “Multi-objective operation optimization of micro grid based on bird swarm algorithm,” Power System Protection and Control, vol. 44, no. 13, pp. 117–122, 2016.
  25. C. Jian, M. Li, and X. Kuang, “Edge cloud computing service composition based on modified bird swarm optimization in the internet of things,” Cluster Computing, vol. 12, pp. 1–9, 2018.
  26. M. Ahmad, N. Javaid, I. A. Niaz, S. Shafiq, O. U. Rehman, and H. M. Hussain, “Application of bird swarm algorithm for solution of optimal power flow problems,” in Complex, Intelligent, and Software Intensive Systems, vol. 772 of Advances in Intelligent Systems and Computing, pp. 280–291, Springer, Cham, 2019.
  27. C. Xu and R. Yang, “Parameter estimation for chaotic systems using improved bird swarm algorithm,” Modern Physics Letters B, vol. 31, no. 36, 2017.
  28. L. Zhang, Q. Bao, W. Fan, K. Cui, H. Xu, and Y. Du, “An improved particle filter based on bird swarm algorithm,” in Proceedings of the 10th International Symposium on Computational Intelligence and Design (ISCID), pp. 198–203, Hangzhou, China, December 2017.
  29. C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, pp. 379–423, 1948.

Copyright © 2018 Min Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
