Research Article  Open Access
A Water Flow-Like Algorithm for the Travelling Salesman Problem
Abstract
The water flow-like algorithm (WFA) is a relatively new metaheuristic that performs well on the object grouping problem encountered in combinatorial optimization. This paper presents a WFA for solving the travelling salesman problem (TSP) as a graph-based problem. The performance of the WFA on the TSP is evaluated using 23 TSP benchmark datasets and by comparing it with previous algorithms. The experimental results show that the proposed WFA found better solutions in terms of the average solution quality and the percentage deviation of the average solution from the best-known solution.
1. Introduction
The travelling salesman problem (TSP) is a classic combinatorial optimization problem (COP) that has long been studied [1]. The TSP searches for the shortest route among a set of cities with known distances between each pair of cities. The problem can be articulated as a complete graph with a set of vertices (cities) and a set of edges weighted by the distances between pairs of vertices; the goal is to find the shortest route that visits each city exactly once and returns to the city of origin.
Numerous approaches have been proposed and have obtained good solutions. However, they vary in complexity and efficiency and in their ability to solve TSP instances of various sizes (small, medium, and large). Earlier studies used linear programming (Dantzig et al. [2]), dynamic programming (Held and Karp [3]), branch and bound [4], and branch and cut [1], but their ability is limited to small problems (fewer than 40 cities). Later, artificial intelligence approaches proved able to solve more complex problems; one of these approaches, the self-organized neural network [5–7], was later expanded into metaheuristics. A metaheuristic can optimize a complex problem by searching through many candidate solutions with few or no assumptions about the problem being solved and without any guarantee of finding the optimal solution. Some metaheuristics use either a single-solution-based approach (e.g., tabu search (TS) and simulated annealing (SA)) or a population-based approach (e.g., the genetic algorithm (GA)) [8–10], while others use swarm intelligence (e.g., ant colony optimization (ACO)) [11]; recently, hybrid metaheuristics have also been proposed [7, 12]. The results show that hybrid metaheuristics can produce the best results on 23 benchmark TSP datasets.
Recently, a new metaheuristic algorithm known as the water flow-like algorithm (WFA) has been proposed [13]. The algorithm is inspired by water flowing from higher to lower altitudes. A flow can split into subflows when it traverses rugged terrain, and these subflows merge when they meet at the same location. Flows stagnate at lower-altitude locations if their momentum cannot expel water from the current location. In the algorithm, a flow represents a solution agent, the flow altitude represents the objective function, and the solution space of the problem is represented by a geographical terrain.
Previous metaheuristic algorithms were designed to search the problem space with a fixed number of solution agents [8, 11, 14]; however, some approaches have been proposed to obtain an appropriate setting of the algorithm's population size. These approaches are categorized as either offline or online population size tuning. In offline tuning, the aim is to find an appropriate population size before the algorithm starts the optimization process; the tuning is performed by trial and error by a human agent, which makes it time consuming and error prone and usually leads to uneven tuning of the algorithm [15]. Online tuning, by contrast, adjusts the population size during the optimization process, while problem instances are being solved. It has the potential advantage of enabling the algorithm to adapt better to the characteristics of a particular problem instance.
Nevertheless, using online tuning to control the population size of an algorithm remains a challenge [15–19] (see also Gen and Cheng [20]). Many studies have focused on methods that enable population-based metaheuristics to adjust the population size online while solving a problem instance, taking into account the impact of the instance characteristics and the fitness landscape. However, these methods rely on analysing information accumulated during the optimization process, which may relate to global properties of the fitness landscape, such as its ruggedness or noise, or to local properties of a specific landscape region. Gathering and analysing this information adds complexity to an algorithm as well as computation time. In addition, most online parameter tuning methods have been developed to handle the limitations of existing metaheuristics by extending or modifying the algorithmic framework, which is difficult to achieve in most situations, and success has been limited to algorithms designed to search the problem space with a fixed population size.
The WFA [13] uses the concept of a dynamic population as a fundamental framework for algorithm design. The concept can be applied to overcome many of the drawbacks of population-based metaheuristics; to do so, the authors addressed two main issues that affect the efficiency of optimization. The first is the need to reduce the number of redundant searches, which increase the computational cost of the algorithm during the optimization process; a redundant search occurs when population solutions that share the same objective value are combined. The second is giving the algorithm the ability to adapt its population size during the optimization process; in the ACO and GA algorithms, the population size is assigned at the initial stage of execution and cannot be changed during the optimization process.
The WFA has been successfully adapted and applied to different COPs, including the bin-packing [13], manufacturing cell formation [21], and nurse scheduling [22] problems. The results of these studies show that the WFA has good potential for solving several COPs. Therefore, this paper aims to investigate the performance of the WFA when applied to the TSP in terms of accuracy and time. Past research also shows that the WFA finds solutions faster owing to its dynamic population behaviour. The remainder of this paper is organized as follows. Section 2 discusses the literature on the WFA. Section 3 presents the proposed WFA for solving the TSP, while Section 4 presents the experiments and an analysis of the results, followed by the discussion in Section 5. Lastly, Section 6 concludes the paper.
2. Related Work
The WFA [13] is categorized as a population-based metaheuristic algorithm inspired by the natural behaviour of water flowing from higher to lower altitudes. The water flows can split or merge according to the topography of the search space. The main advantage of the WFA is that it is self-adaptive and dynamic in its population size. In other words, the number of solution agents is not fixed, unlike in traditional population-based metaheuristics. The number of flows can increase or decrease during the optimization process, and the population size changes based on the problem dimension and the solution quality found by the agents. Yang and Wang [13] describe and map the dynamic population size to the natural behaviour of water flows as they split, move, and merge.
The first version of the WFA was developed by Yang and Wang [13] to solve an object grouping problem, the bin-packing problem (BPP), which is a discrete optimization problem well known to be NP-hard. The BPP, with its tight capacity constraint, requires heuristic methods to derive feasible and near-optimal solutions. The traditional BPP is the problem of minimizing the number of bins used subject to a weight capacity constraint. The authors used the BPP as a benchmark to measure the feasibility of the WFA for solving such optimization problems. Their algorithm relies mainly on neighbourhood searches, where a one-step movement strategy is employed to find a neighbouring solution, with each flow moving forward by a constant step. The flow movements (location changes) are governed by gravitational force and the law of energy conservation. Iteration by iteration, water constantly moves to lower altitudes, which corresponds to improvements in the solution search. The WFA begins searching the problem space with one solution agent (flow) with an initial momentum. Subsequently, the flow splits into multiple subflows when it encounters rugged terrain and its momentum exceeds the splitting threshold. A flow with more momentum generates more subflow streams than one with less momentum; a flow with limited momentum yields to the landform and remains a single flow. Flows merge into one flow when they obtain the same objective value: to avoid redundant searches, the WFA reduces the number of solution agents when multiple agents move to the same location. Water flows are also subject to evaporation into the atmosphere, and the evaporated water returns to the ground as rain (precipitation). In the WFA, part of the water flow is removed to mimic evaporation, and a precipitation operation is implemented to simulate natural rainfall and explore a wider area.
In [13], the performance of the WFA is compared with the GA, particle swarm optimization (PSO), and ACO. The experimental results showed that the WFA outperforms the GA, PSO, and ACO in terms of both quality and execution time. Based on the experimental results, the authors concluded that the WFA may have the ability to solve complex optimization problems and suggested that it could be used to solve sequencing problems such as the TSP.
In 2010, the WFA was extended to solve the manufacturing cell formation problem [21]. The model utilizes similarity coefficient, machine assignment, and part assignment methods to generate an initial feasible solution in the first stage; in the second stage, a flow splitting and moving step improves the solution using a neighbourhood search to obtain a near-optimal solution. The results showed that the WFA outperforms the hybrid genetic algorithm (HGA) and SA. Shahrezaei et al. [22] used the WFA to solve the nurse scheduling problem, which is a multiobjective optimization problem. The authors compared the WFA with the differential evolution (DE) algorithm, and the results showed that better solution quality could be achieved by the WFA.
The strength of the WFA for solving the TSP lies in a specific feature of the algorithm, namely, the dynamic behaviour of its population size. The WFA maps population solutions to water flows and objective function values to terrain altitudes. Flow splitting occurs when rugged terrain is traversed; conversely, water flows merge when they join at the same point. The proposed WFA for the TSP (WFA-TSP) is based on the basic WFA of [13].
3. Proposed Water Flow-Like Algorithm for the TSP
This section presents the proposed WFA for the TSP. The proposed algorithm adopts the basic operations of initialization, flow splitting and moving, flow merging, water evaporation, and water precipitation. Figure 2 shows the flow of the WFA-TSP, which is adopted from the basic WFA. The algorithm starts the optimization process with the initialization operation, a non-repetitive operation responsible for assigning the initial state of the algorithm; it includes the parameter settings and the generation of the initial solution. Solution improvement starts after the completion of the initialization operation. The algorithm iteratively executes the remaining operations (flow splitting and moving, flow merging, water evaporation, and water precipitation) until the termination condition is met. The main differences between the WFA-TSP and the basic WFA [13] lie in the solution representation, the technique used in initialization, and the flow splitting and moving and precipitation procedures, which always depend on the definition of the neighbourhood structure of the given COP. These three different operations are shown as dark processes in Figure 2. The basic WFA was applied to the bin-packing problem, where a two-dimensional array stores a set of bins and their objects as the solution representation, while in the WFA-TSP a solution is represented as a one-dimensional array of length n, where n is the number of cities in the dataset. Each cell in the array holds the city visited at that position of the tour. Figure 1 shows a sample representation of a TSP solution: a feasible solution is a sequence of nodes in which the index denotes the position in the tour and the cell value denotes the city visited at that position, so the sequence represents a specific tour path.
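The array representation described above can be sketched as follows; the coordinates, names, and the tour-length helper are illustrative assumptions of ours, not taken from the paper:

```python
import math

# A TSP solution is stored as a one-dimensional array of length n: a
# permutation of city indices giving the visiting order (the tour is
# closed, returning from the last city to the first).
def tour_length(tour, coords):
    """Euclidean length of the closed tour over the given coordinates."""
    n = len(tour)
    total = 0.0
    for i in range(n):
        x1, y1 = coords[tour[i]]
        x2, y2 = coords[tour[(i + 1) % n]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Four illustrative cities on a 4 x 3 rectangle (not from the paper).
coords = [(0, 0), (0, 3), (4, 3), (4, 0)]
tour = [0, 1, 2, 3]  # visit the cities in index order
```

Any permutation of the indices 0..n-1 is a feasible solution under this representation, which is what makes the neighbourhood moves in Section 3 easy to define.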
The WFA-TSP uses parameter settings similar to those of the basic WFA. However, whereas the basic WFA uses a random solution construction technique to generate the initial solution, the WFA-TSP uses the nearest neighbour heuristic.
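The nearest neighbour construction used for the initial solution can be sketched as below; the function name and example coordinates are our own illustrative assumptions:

```python
import math

def nearest_neighbour_tour(coords, start=0):
    """Greedy NN construction: from the current city, always visit the
    closest unvisited city next; returns the visiting order."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited,
                  key=lambda c: math.dist(coords[c], coords[last]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four illustrative cities on a line (coordinates are our own example).
coords = [(0, 0), (10, 0), (1, 0), (2, 0)]
```

From city 0 the heuristic picks city 2 (distance 1) before city 3 and city 1, giving a reasonable, though not optimal, starting tour in O(n^2) time.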
In the splitting and moving operation, the design of the flow moving procedure is problem dependent; different problem types have their own designs for flow moving [13]. The basic WFA uses a one-step move neighbourhood structure, while the flow moving procedure in the WFA-TSP is designed around the neighbourhood structure of the TSP: it combines two types of neighbourhood structure, namely, the insertion move and the 2-opt neighbourhood structure, in order to find the best neighbouring solution of the current flow. The moving procedure and the proposed neighbourhood structure are illustrated further in the algorithm step descriptions below.
The flow merging operation follows the flow splitting and moving operation. It is executed conditionally, only when two or more flows in a given iteration generate redundant solutions. Similarly, the water evaporation operation, performed after flow merging, is executed only if the evaporation condition is met. Note that both the flow merging and the water evaporation operations are adopted in the WFA-TSP without modification, as their behaviour does not depend on the problem domain being solved. The water precipitation operation is executed conditionally when the velocity of all flows drops to zero. As with the flow movement procedure, the enforced precipitation procedure is problem dependent; therefore, a special solution modification procedure is designed for the WFA-TSP. A stochastic modification of the poured-down flows is performed using the insertion move procedure in order to modify the locations of all poured-down flows (see Steps 8–10).
Step 1. Construct the initial solution using the nearest neighbour (NN) algorithm and initialize the WFA parameters (see Table 1).
Step 2. Perform flow splitting for all flows if they have enough momentum.
Step 3. Perform flow moving for all nonzero velocity subflows to new locations using the insertion move procedure.
Step 4. Find the best neighbourhood solution for all subflows using the 2-opt neighbourhood search procedure.
Step 5. Calculate the mass and velocity of all subflows.
Step 6. Merge subflows if they have the same objective value and update the mass and velocity.
Step 7. Check evaporation conditions; if yes, perform the evaporation operation for each flow.
Step 8. Check the regular precipitation conditions; if met, pour down the evaporated water by adding flows to the current flow set and assign the initial velocity to the poured-down flows. The locations of the poured-down flows are then derived stochastically from the original flow locations using the insertion move procedure.
Step 9. Check the enforced precipitation conditions; if met, return and distribute the evaporated water to the current flow set. Then, reset the velocity of the current flows to the initial velocity value. The locations of the current flows are stochastically changed using the insertion move procedure.
Step 10. After performing either kind of water precipitation, check whether the new solutions have the same objective value; if so, perform Step 6.
Step 11. Repeat Steps 2–10 until the termination condition is met.
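The control flow of Steps 2–11 can be summarized in a skeleton like the following; the operation names and the flow record structure are hypothetical placeholders for illustration, not the authors' implementation:

```python
def wfa_tsp(initial_flow, max_iterations, operations):
    """Control-flow skeleton of Steps 2-11: each operation maps the current
    list of flows to a new list; the best (lowest-cost) flow is returned.
    `operations` bundles the four repeated WFA operations as callables."""
    flows = [initial_flow]              # Step 1: a single initial flow
    for _ in range(max_iterations):     # Step 11: termination condition
        flows = operations["split_and_move"](flows)   # Steps 2-5
        flows = operations["merge"](flows)            # Step 6
        flows = operations["evaporate"](flows)        # Step 7
        flows = operations["precipitate"](flows)      # Steps 8-10
    return min(flows, key=lambda f: f["cost"])
```

With identity operations the skeleton simply returns the initial flow; the real operations are the subject of the step descriptions that follow.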
Step 1 (initialization). A solution structure is necessary to map a TSP solution into a one-dimensional array of size n (n: the number of cities to be visited). The TSP solutions are associated with flows; thus each water flow contains a one-dimensional array storing the generated TSP solution. As previously mentioned, the WFA starts the search process with a single flow, so this flow is constructed with an initial mass M0 and an initial velocity V0 such that their values allow the initial flow to fork into at least two subflows, as recommended by [13]; the flow can split only if its momentum M0·V0 exceeds the base momentum. The location of the WFA's single water flow (one solution agent) is assigned based on the initial solution value. In the WFA-TSP, the initial solution is generated using a greedy constructive method, the NN heuristic, as used in [23], to generate a close approximate solution to the problem. Table 1 describes the parameters of the WFA-TSP.

Steps 2–4 (flow splitting and moving). The WFA is characterized by its ability to adapt to a dynamic population size when searching the solution space. As mentioned above, the WFA begins with a single flow and then several branching flows are created based on the quality of the discovered solutions. The flow splits based on its momentum; a flow with a higher momentum generates more subflows than one with less momentum. Taking into account gravitational force and the law of energy conservation, the flow moves to a new location. The locations of the split subflows are derived from the neighbouring locations of the original flow. A flow movement is a solution search from the current location to a new location.
Let N be the number of water flows in the current iteration. In the flow splitting operation, the number of subflows n_i that branch from flow W_i, where i = 1, ..., N, is determined by the flow momentum M_i V_i. A flow with zero momentum does not split and stays at the same location; its solution is considered a stagnant solution. Conversely, a flow can split into subflows if its momentum exceeds a base momentum T, which is defined precisely to compute the number of subflows. If the momentum of W_i lies between zero and T (0 < M_i V_i <= T), then W_i does not split and instead moves to a new location as a single stream; this avoids generating extra subflows that could consume resources unnecessarily. Yang and Wang [13] define an upper limit n_max on the number of subflows that can split from an original flow in each iteration. At any iteration, the number of subflows is calculated using (1):

n_i = min(max(1, floor(M_i V_i / T)), n_max).   (1)

When W_i splits into n_i subflows W_i1, ..., W_in_i, the mass M_i of W_i is distributed to the subflows based on their ranks. The mass M_ik of subflow W_ik, which splits from W_i, is calculated using (2):

M_ik = ((n_i + 1 - k) / (1 + 2 + ... + n_i)) M_i,  k = 1, ..., n_i.   (2)

The velocity of a subflow is calculated using the equation of energy conservation; V_ik, the velocity of subflow W_ik, is given by (3):

V_ik = sqrt(V_i^2 + 2g Δh_ik),   (3)

where g is the gravitational acceleration and Δh_ik is the improvement in the objective value from solution X_i to its neighbouring solution X_ik; it represents the altitude decrease from X_i to X_ik. If Δh_ik <= 0, there is no improvement and the flow stagnates at a local optimum with no splitting or moving. Such a stagnant flow gradually evaporates into the atmosphere, later returning to the ground as precipitation. At the end of the flow splitting and moving operation, the original flow W_i is discarded because its subflows have been generated, and the information on the current number of subflows and their solution sets is recorded.
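The splitting bookkeeping described above can be sketched as follows; the helper names are ours, and the symbols (mass, velocity, base momentum T, cap n_max, gravitational g) follow the description rather than any published code:

```python
import math

def subflow_count(mass, velocity, base_momentum, n_max):
    """Number of subflows: zero momentum -> stagnant; momentum below the
    base momentum T -> a single non-splitting stream; else capped at n_max."""
    momentum = mass * velocity
    if momentum <= 0:
        return 0                                  # stagnant flow
    return min(max(1, int(momentum / base_momentum)), n_max)

def subflow_masses(mass, k):
    """Distribute the parent mass over k subflows by rank: the j-th ranked
    subflow receives a (k + 1 - j) / (1 + 2 + ... + k) share."""
    denom = k * (k + 1) / 2                       # 1 + 2 + ... + k
    return [(k + 1 - j) * mass / denom for j in range(1, k + 1)]

def subflow_velocity(velocity, g, delta_h):
    """Energy conservation: delta_h is the objective improvement (altitude
    drop); a non-positive value cannot accelerate the subflow."""
    return math.sqrt(max(0.0, velocity ** 2 + 2 * g * delta_h))
```

Note that the rank-based shares always sum to the parent mass, so total mass is conserved by splitting.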
The flow movement procedure always depends on the definition of the neighbourhood structure of the given COP: flows move from one location (solution) to another (a neighbouring solution) obtained by a small modification of the current solution. A neighbour of a given solution is any other solution obtained by a pairwise exchange of two nodes in the solution; this guarantees that any neighbour of a feasible solution is itself feasible. Many neighbourhood search strategies have been developed in the literature especially for the TSP. These strategies evaluate the cost of moving one or more cities to find better neighbouring solutions.
In the WFA-TSP, the splitting and moving of a flow is associated with a neighbourhood search of the current solution: each subflow's location is derived from a neighbouring location of the original flow. The number of subflows branching from W_i is calculated from the momentum of W_i, and, for each subflow splitting from W_i, the subflow's location is assigned the objective value of a new neighbouring solution obtained by a random insertion move of one city. Once the locations of all subflows are known, a second stage searches intensively around the newly assigned locations using the 2-opt neighbourhood search procedure [24] until the best neighbour of each new location is found. The main idea of the 2-opt procedure is to cut two edges of the tour, reverse one of the two partial sequences, and reconnect the edges.
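The two neighbourhood moves named above, a random insertion move followed by a 2-opt step, can be sketched as below; the instance, names, and the first-improvement strategy are our own illustrative assumptions:

```python
import math
import random

def insertion_move(tour, rng=random):
    """Neighbour by insertion: remove one randomly chosen city and
    reinsert it at a random position."""
    new = tour[:]
    city = new.pop(rng.randrange(len(new)))
    new.insert(rng.randrange(len(new) + 1), city)
    return new

def two_opt_once(tour, dist):
    """One 2-opt step: cut edges (a,b) and (c,d), reverse the segment in
    between, and reconnect; returns the first improving neighbour found,
    or the tour itself if no edge pair improves it."""
    n = len(tour)
    for i in range(n - 1):
        # avoid the degenerate pair that reuses the closing edge when i == 0
        for j in range(i + 2, n if i > 0 else n - 1):
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
    return tour

# Illustrative 4-city instance (a 4 x 3 rectangle, not from the paper).
coords = [(0, 0), (4, 0), (0, 3), (4, 3)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in coords]
        for (px, py) in coords]
```

On this instance, repeatedly applying `two_opt_once` untangles the crossing tour [0, 1, 2, 3] until no improving edge exchange remains.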
Figure 3 illustrates the splitting and moving operation used in Step 4 to search for neighbouring TSP solutions. As shown in the figure, W_i splits into a number of subflows calculated from the momentum of the initial flow; the insertion move procedure is then used to determine the new locations of the subflows. Once these locations are obtained, Step 4 performs the 2-opt procedure to search for the best neighbouring solution of each subflow's solution. This step is repeated until the algorithm stops searching, based on a predefined termination condition. The 2-opt algorithm of [24] is used in this work. Iteration by iteration, newly generated subflows may merge with other subflows sharing the same location, creating a single flow. During the optimization process, additional subflows are likely to be generated in later iterations, or flows may remain stuck in their current locations until the stopping criteria of the algorithm are met. The basic WFA, in contrast, uses only its one-step move in every iteration.
Step 6 (flow merging operation). When two or more flows meet at the same location, they merge into a single flow with greater mass and momentum, reducing the number of solution agents at that location. The merging operation prevents redundant searching by solution agents with the same objective value and may also help stop stagnated flows from becoming trapped in certain locations. The operation regularly checks whether any current flows share the same location. Assuming that flows W_i and W_j share the same location, the mass M_i and velocity V_i are updated using (4) and (5), respectively:

M_i = M_i + M_j,   (4)

V_i = (M_i V_i + M_j V_j) / (M_i + M_j).   (5)
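A minimal sketch of the merge update described above: mass adds, and the merged velocity is the momentum-weighted average, so total momentum is conserved. The function name is ours:

```python
def merge_flows(m_i, v_i, m_j, v_j):
    """Merge flow j into flow i: combined mass is the sum, combined
    velocity is the momentum-weighted average of the two velocities."""
    merged_mass = m_i + m_j
    merged_velocity = (m_i * v_i + m_j * v_j) / merged_mass
    return merged_mass, merged_velocity
```

The momentum-weighted average is what makes the merged flow carry "greater mass and momentum" rather than losing search energy when agents collide.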
Step 7 (water evaporation operation). As a natural behaviour, water evaporates and returns to the ground as precipitation. The WFA uses the concepts of water evaporation and precipitation, after moving from one location to another, to help flows escape from local optima and search a wider solution space. In this operation, the flow mass is updated using (6):

M_i = (1 - n/t) M_i^0,   (6)

where M_i^0 is the mass at the time when W_i was initially generated or when it last merged with another flow, and n is the number of iterations elapsed since then. Each water flow thus evaporates into the atmosphere at a fixed ratio of 1/t per iteration, as in (6). If a splitting or merging operation does not update a flow, the flow is removed after t iterations.
Steps 8–10 (water precipitation operation). Two types of precipitation are performed in the WFA to simulate the natural lifecycle of water [13]. The first type is enforced precipitation, which is used to revive all grounded flows. The second type is regular precipitation, which is executed once every t iterations to return all the evaporated water to the ground as rain.
Enforced Precipitation. When the velocity of all flows is zero, there is no improvement in the solution search and all flows are stuck in local optima. In this situation, a process is needed to revive the flows by stochastically deviating their locations (generating new solutions) from the original locations (original solutions) and resetting the flows' mass and velocity. The masses of all poured flows are updated without changing the current number of flows, and an initial velocity is assigned to them. Mass is distributed proportionally to all flows according to their original mass using (7):

M_i = (M_i / sum_{j=1..N} M_j) M_0,   (7)

where M_0 is the total initial mass. The main reason for applying enforced precipitation is to increase the exploration of the solution space and prevent the flows from stopping the search process. Stochastic relocation of the current flows means a stochastic modification of the solution associated with each flow. As with the flow movement procedure, the enforced precipitation procedure must be customized to the structure of the optimization problem's solutions; therefore, a special solution modification procedure is designed for the WFA-TSP. A complete description of the enforced precipitation procedure follows.
As mentioned, flow relocation is carried out by stochastic modification of the solutions associated with the flows. A certain number of cities in each solution are selected for modification, which is performed by the insertion move procedure: each selected city moves to a new, randomly selected position in the solution. In the proposed method, the positions of the cities to be modified in a flow are obtained stochastically from the flow's original coordinates. For example, if the new location of W_i is (x'_1, ..., x'_n), each x'_j is obtained stochastically from the original coordinate x_j by using (8):

x'_j = x_j + round(u * 2G) - G.   (8)

Here, x_j - G <= x'_j <= x_j + G, where G is the coordination offset and u is a runtime-generated random value ranging from 0 to 1. This formula guarantees that the deviated locations of the flows are either bounded within the allowed relocation range or remain at the same location.
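The enforced precipitation bookkeeping can be sketched as follows: proportional mass redistribution plus a bounded stochastic position deviation. The deviation formula is our reading of the description (offset G, uniform random u in [0, 1]), so treat it as an assumption rather than the authors' exact rule:

```python
def redistribute_mass(masses, total_initial_mass):
    """Share the initial total mass over the revived flows in proportion
    to their masses at the moment all velocities hit zero."""
    s = sum(masses)
    return [m / s * total_initial_mass for m in masses]

def deviate_position(pos, offset, u):
    """Bounded stochastic deviation of a city position: the result stays
    within [pos - offset, pos + offset]; u is uniform in [0, 1]."""
    return pos + round(u * 2 * offset) - offset
```

With u = 0.5 the position is unchanged, matching the statement that deviated locations may remain at the same location.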
Regular Precipitation. This type of precipitation is performed periodically to return the evaporated water to the ground; the operation executes once every t iterations to pour the evaporated water down. The accumulated mass of evaporated water, M_e, is reassigned to the ground flows, as in (9):

M_{N+i} = (M_i / sum_{j=1..N} M_j) M_e,  i = 1, ..., N.   (9)

However, the revived flows may generate the same objective values and location assignments as other flows; therefore, a merging operation is performed after precipitation to remove possible redundant solutions.
This operation can be regarded as a wide exploration of the solution space. As suggested by [13], a number of flows are poured back down to the ground and added to the current flow set to represent rainfall. The number of poured-down flows is taken to be equal to the number of current flows N, which duplicates the number of flows and thereby increases the solution-space exploration ability of the algorithm. If P is the set of newly generated flows, then, similarly to the enforced precipitation deviation, the locations of the flows in P are derived stochastically, but from the current flow set, using the insertion move procedure.
4. Experiments and Results
The performance of the proposed WFA-TSP is evaluated through several experiments using standard benchmark TSP datasets from TSPLIB [25]. The experiments were performed on 23 city datasets, ranging from 51 to 3795 cities. The WFA-TSP was implemented in Java (JDK 1.6) in a Windows environment on a personal computer with an Intel Core i5 processor (3.00 GHz CPU and 4 GB RAM). The experiments measure the solution cost and computation time, obtained from 10 runs for each dataset, with 10,000 iterations for each independent run; this number of iterations is required to reach the best solution. The minimum, average, and standard deviation (Std.) of the solution cost over the 10 independent runs are calculated. The distances between cities are Euclidean distances rounded to the nearest integer. The average computational cost is also determined. The results are compared with the ant colony system (ACS) [11] and other algorithms in the literature. Table 2 shows the parameter settings of the tested algorithms. The parameter settings of the WFA-TSP follow those in [9]; preliminary experiments showed that these settings obtained the best results.

4.1. The Performance of WFA-TSP Compared to ACS
This section compares ACS and WFA-TSP on datasets ranging from 51 to 3795 cities, representing small, medium, and large problem sizes. Table 3 presents the comparison in terms of the best solution quality, the average number of iterations, and the computation time of the tested algorithms. The table also compares ACS and WFA-TSP in terms of solution accuracy (in percent) and the deviation of the mean values from the best-known solution. The p values are provided for the computation time and solution quality to test whether there is any significant difference between the ACS and WFA-TSP algorithms. The discussion in this section is divided into two parts: (i) best performance and (ii) solution accuracy. Note that the best results are presented in bold in Table 3.
 
Note: minimum (Min), maximum (Max), standard deviation (Std.); PD_avg is the percentage deviation of the average from the best-known solution; ~ denotes 0.000; accuracy % = ((mean_ACS − mean_WFA-TSP)/mean_ACS) × 100.
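The two measures defined in the note can be computed directly; PD_avg follows the standard percentage-deviation definition assumed here, and the numeric values below are illustrative, not from Table 3:

```python
def accuracy_pct(mean_acs, mean_wfa):
    """Improvement of WFA-TSP over ACS as defined in the note to Table 3:
    ((mean_ACS - mean_WFA-TSP) / mean_ACS) * 100."""
    return (mean_acs - mean_wfa) / mean_acs * 100

def pd_avg(mean, best_known):
    """Percentage deviation of the average solution cost from the
    best-known solution (standard definition, assumed here)."""
    return (mean - best_known) / best_known * 100
```

For example, an ACS mean of 100 against a WFA-TSP mean of 96 yields a 4% accuracy improvement.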
4.1.1. The Best Performance
Table 3 compares the performance of the WFA-TSP with that of the ACS algorithm using the best solution found and the average number of iterations and elapsed time needed to find the best solution over 10 independent runs. The results show that the WFA-TSP outperforms the ACS algorithm on all datasets in terms of the best solution found. Furthermore, the WFA-TSP reaches the optimal solution in 9 out of 18 datasets, namely, "eil51," "eil76," "kroA100," "eil101," "bier127," "ch130," "ch150," "kroA150," and "kroA200," whereas the ACS algorithm reaches the optimal solution only for "eil76" and "kroA100." Table 3 and Figure 4 indicate how the average computation time of both algorithms changes as the number of cities gradually increases. The figure shows that the computation times of ACS and WFA-TSP are relatively close: the WFA-TSP is slightly faster than ACS on small datasets, but significantly outperforms ACS on medium-sized datasets. Additionally, Table 3 and Figure 5 show that the average computation time of ACS over the 10 runs clearly increases with dataset size, reaching up to 6204 seconds, whereas the WFA-TSP searches much faster, with an average computation time below 5000 seconds on the large datasets.
The table also shows the percentage improvement in the computation time of WFA-TSP when the average times of the two algorithms are compared; across the datasets the improvement reaches up to 80%. Table 3 also reports the significance of the statistical results for all datasets: WFA-TSP shows a significant improvement in computation time, with p values below 0.05 for all datasets. From this, a conclusion can be drawn about the scalability of WFA-TSP on small, medium, and large datasets, where WFA-TSP outperforms ACS in both solution quality and computation time.
4.1.2. Solution Accuracy
Table 3 reports the descriptive statistics for the experimental results of the algorithms tested. These statistics were used to evaluate the search accuracy of the proposed method against the ACS algorithm. The solution quality results in Table 3 show that WFA-TSP searches better than ACS: the mean and standard deviation values obtained by WFA-TSP on all datasets are lower, indicating that WFA-TSP is more stable in its search than ACS. The table also outlines the percentage improvement in the search accuracy of the proposed method, obtained by comparing the means of the two algorithms. The difference in search accuracy achieved by WFA-TSP is clearly significant: WFA-TSP shows a better improvement percentage on all datasets, ranging from 0% to 4%. In addition, the improvement percentage grows as the number of cities increases, indicating that WFA-TSP searches for solutions far more accurately than ACS.
Table 3 also summarizes the percentage deviation of the experimental results obtained by WFA-TSP and ACS from the best-known results, comparing the two algorithms using the percentage deviation of the average solution from the best-known solution (PD_avg). The results show that the deviation of the average solution from the best-known solution is significantly better for WFA-TSP than for ACS on all datasets. From these comparisons, WFA-TSP is generally better than ACS in terms of solution accuracy. Additionally, based on the values reported in Table 3, WFA-TSP shows a significant improvement in solution quality, with p values below 0.05 for all datasets. Overall, the findings reveal that WFA-TSP is faster and finds better-quality solutions than the ACS algorithm on small- and medium-size datasets.
4.2. Performance of WFA-TSP versus Other State-of-the-Art Metaheuristics on the TSP
Table 3 presents the results of the experiments, in which the performance of WFA-TSP is compared with that of the algorithms proposed by Somhom et al. [5], Pasti and De Castro [6], and Masutti and de Castro [7] and with the genetic simulated annealing ant colony system with particle swarm optimization (GSA-ACS-PSOT) technique proposed in [12], considering the best solution found and the average solutions over 10 independent runs. In the table, BKS denotes the best-known solution, mean the average, SD the standard deviation, and best the best solution found, which is highlighted in bold.
It can be seen that the results of WFA-TSP are generally better than those of Somhom et al. [5], Pasti and De Castro [6], Masutti and de Castro [7], and GSA-ACS-PSOT [12]. As the table shows, WFA-TSP outperformed GSA-ACS-PSOT in 18 out of 23 datasets in terms of the best solution found: eil51, eil76, eil101, berlin52, bier127, ch130, ch150, rd100, lin105, kroA100, kroA150, kroA200, kroB100, kroC100, kroD100, kroE100, lin318, and fl1400. WFA-TSP also outperformed GSA-ACS-PSOT in nearly all datasets in terms of the average solutions, except for rat575 and rat783. It can also be seen that the algorithms of Somhom et al. [5], Pasti and De Castro [6], and Masutti and de Castro [7] could not reach the optimal solution for the tested datasets, whereas WFA-TSP reached the optimal solution for 13 of the 23 datasets. The analysis therefore shows that WFA-TSP performs consistently well compared with the other algorithms. Also, the SD of the best solution cost for WFA-TSP over 10 runs was lower than that of the other algorithms.
Table 4 presents a comparison of the experimental results using the percentage deviations of the average and best solutions from the best-known solution for WFA-TSP, Somhom et al. [5], Pasti and De Castro [6], Masutti and de Castro [7], and GSA-ACS-PSOT [12]. The percentage deviation of the average solution from the best-known solution is denoted PD_avg, while the deviation of the best solution from the best-known solution is denoted PD_best. The table shows that the PD_avg of WFA-TSP was significantly better than that of the other algorithms for all datasets except rat575 and rat783. The difference in performance between WFA-TSP and GSA-ACS-PSOT in terms of PD_avg on each dataset can be observed clearly in Figure 6, where the PD_avg values of WFA-TSP are plotted against those of the other algorithms. Table 4 also shows that in 19 of the 23 datasets the PD_best of WFA-TSP is generally better than that of the other algorithms, including for datasets eil51, eil76, eil101, berlin52, bier127, ch130, ch150, rd100, lin105, kroA100, kroA150, kroA200, kroB100, kroC100, kroD100, kroE100, lin318, and fl1400.


5. Discussion
Based on the experimental results for test problems of different sizes, the following observations can be made.
Searching Behaviour. The performance of WFA-TSP described above shows how the dynamic-population concept of the WFA influences the search behaviour: changing the number of flows through flow splitting and merging allows the WFA to maintain an appropriate population size throughout the optimization process. This dynamic behaviour depends on the landscape characteristics of the problem instance being solved. The experimental results therefore indicate that the complexity of the WFA increases more smoothly than that of ACS as the number of cities grows. Figure 7 shows the general search behaviour of WFA-TSP and the performance of the algorithm in terms of exploration and exploitation. The water evaporation and precipitation operations improve the algorithm's ability to find good solutions in less time: they prevent WFA-TSP from converging prematurely and from being trapped in local optima.
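To make the interplay of splitting, merging, evaporation, and precipitation concrete, the following is a schematic sketch of a dynamic-population water-flow loop. This is not the paper's implementation: the split condition, mass values, evaporation rate, precipitation trigger, and the `perturb` move are all hypothetical placeholders; only the overall control flow (split in promising regions, merge identical flows, evaporate weak flows, precipitate accumulated mass back into the search) follows the description above.

```python
import random


def perturb(sol):
    # Hypothetical neighbourhood move: swap two random positions.
    s = sol[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s


def wfa_sketch(objective, initial, iterations=100, w0=8, base_mass=4.0):
    # Each flow is a (solution, mass) pair; mass governs splitting and evaporation.
    flows = [(initial[:], base_mass) for _ in range(w0)]
    best, best_cost = initial[:], objective(initial)
    evaporated = 0.0
    for _ in range(iterations):
        moved = []
        for sol, mass in flows:
            nb = perturb(sol)
            if objective(nb) < objective(sol) and mass >= 2.0:
                # Split: a promising region attracts more, lighter flows.
                moved += [(nb, mass / 2), (sol, mass / 2)]
            else:
                moved.append((nb, mass))
        # Merge: flows that reach an identical solution pool their mass.
        merged = {}
        for sol, mass in moved:
            merged[tuple(sol)] = merged.get(tuple(sol), 0.0) + mass
        flows = []
        for key, mass in merged.items():
            # Evaporation: every flow loses a fraction of its mass.
            loss = 0.1 * mass
            evaporated += loss
            if mass - loss > 0.5:  # flows that become too weak dry up
                flows.append((list(key), mass - loss))
        if not flows:
            flows = [(best[:], base_mass)]
        # Precipitation: accumulated evaporated mass re-enters the search.
        if evaporated > w0 * base_mass:
            flows.append((perturb(best), evaporated))
            evaporated = 0.0
        for sol, _ in flows:
            cost = objective(sol)
            if cost < best_cost:
                best, best_cost = sol[:], cost
    return best, best_cost
```

Because splitting only fires where a neighbour improves the objective, the population grows in promising regions and shrinks elsewhere, mirroring the behaviour shown in Figure 9.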
The black arrow points to the local-best area, which contains the highest objective values. Solution exploration areas clearly appear several times during the optimization process, indicating that exploration occurs repeatedly. The green arrow, on the other hand, marks a sample solution exploitation area; the objective values in this area are among the lowest, indicating that exploitation is performed in this region. As with exploration, exploitation appears several times in the graph. A good balance between the exploration and exploitation processes is therefore clearly visible, with the two kinds of areas appearing roughly equally often.
Scalability. Both algorithms are scalable on small- and medium-size datasets: the computation times of ACS and WFA-TSP increase linearly with the number of cities and do not change significantly until the number of cities reaches 318. Figure 8 shows a log-scale graph for the "lin318" dataset, in which the WFA converges to the optimal solution faster than ACS. Furthermore, the difference in convergence speed grows once the number of cities exceeds 575, as can be seen in Figure 3 and, for the larger datasets, in Figure 4. Although ACS is fast on small- and medium-size datasets, WFA-TSP improves the search speed by up to 80% compared with ACS. On large datasets, however, the scalability of ACS is poor: its computation time grows as the problem becomes larger and more complex, whereas WFA-TSP has proved a far more scalable method for large problems, with an improvement of up to 38% over the ACS algorithm.
Performance. WFA-TSP is more efficient than ACS: it finds the optimal solution in 9 out of 19 datasets, whereas ACS obtains the optimal solution in only two. On small- and medium-size datasets, ACS and WFA-TSP both perform well up to 318 cities, where the percentage deviations of the best and average solution quality from the best-known solutions are below 3.9% and 2.6% for ACS and below 1.1% and 0.6% for WFA-TSP, respectively. However, once the number of cities reaches 575, the percentage deviations of the best and average solution quality for ACS rise to 7.6% and 6.2%. In general, WFA-TSP performs better than ACS, with lower percentage deviations and an improvement in search accuracy of up to 4%. On large datasets the gap widens considerably: WFA-TSP significantly outperforms ACS in solution quality, with percentage deviations of the best and average solution quality reaching 23.2% and 22.7% for ACS but only 4.4% and 4.3% for WFA-TSP. Overall, WFA-TSP performs better than ACS on large datasets, with much lower percentage deviations and an improvement in search accuracy of up to 14.7%. This is due to the fast convergence of WFA-TSP towards good solution areas, which results from the dynamic population size of the WFA, especially as the water-flow solution agents split intensively in the more promising solution regions. This dynamic behaviour of the population size is clearly visible in Figure 9, which shows the population size changing during the optimization process when solving the "d1665" TSP dataset.
An increase in the number of flows indicates that the algorithm has found a promising solution area, whereas a decrease in the number of flows indicates redundancy in the search for solutions.
6. Conclusion
This paper has presented the WFA for the TSP, which differs from the basic WFA because the TSP is a graph-based problem rather than an object grouping problem. WFA-TSP uses the nearest-neighbour (NN) heuristic for initialization, applies the one-step insertion move with 2-opt neighbourhood search strategies for flow splitting and moving, and adopts a different solution representation in the precipitation operation, in accordance with the TSP problem domain. The experimental results show that WFA-TSP outperforms ACS in terms of the best solution and computation time for all datasets. WFA-TSP also performs best when compared with recent metaheuristic algorithms. This study has demonstrated that the WFA is suitable for obtaining good solutions to the TSP. A particular strength of WFA-TSP is its fast computation time, which makes it a strong candidate for problems where computation time matters, such as web service composition or real-time applications. Since the WFA consists of several components that influence the performance of the algorithm, there are many potential improvements that could be made to WFA-TSP; in particular, the water flow splitting and moving procedure could be improved by using better neighbourhood search strategies. Past research on the TSP has used various other neighbourhood search strategies, such as 3-opt, 4-opt, simulated annealing, and tabu search, to improve metaheuristic algorithms.
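The NN initialization and 2-opt neighbourhood search mentioned above are standard TSP components; a minimal, self-contained sketch for Euclidean instances follows. This is an illustrative baseline only, not the paper's exact splitting and moving procedure:

```python
import math


def nearest_neighbour_tour(coords):
    """Greedy nearest-neighbour tour construction, starting from city 0."""
    n = len(coords)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(coords[last], coords[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour


def tour_length(tour, coords):
    """Total length of the closed tour (returns to the starting city)."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))


def two_opt(tour, coords):
    """Repeatedly reverse a segment of the tour while doing so shortens it."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, coords) < tour_length(tour, coords):
                    tour, improved = cand, True
    return tour
```

On a unit square, for instance, 2-opt uncrosses any crossing tour produced by the greedy construction and returns the perimeter tour of length 4.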
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] E. L. Lawler, The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley-Interscience Series in Discrete Mathematics, Wiley, 1985.
[2] G. Dantzig, R. Fulkerson, and S. Johnson, "Solution of a large-scale traveling-salesman problem," Journal of the Operations Research Society of America, vol. 2, pp. 393–410, 1954.
[3] M. Held and R. M. Karp, "A dynamic programming approach to sequencing problems," Journal of the Society for Industrial and Applied Mathematics, vol. 10, pp. 196–210, 1962.
[4] E. Balas and N. Christofides, "A restricted Lagrangian approach to the traveling salesman problem," Mathematical Programming, vol. 21, no. 1, pp. 19–46, 1981.
[5] S. Somhom, A. Modares, and T. Enkawa, "A self-organising model for the travelling salesman problem," Journal of the Operational Research Society, vol. 48, no. 9, pp. 919–928, 1997.
[6] R. Pasti and L. N. De Castro, "A neuro-immune network for solving the traveling salesman problem," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '06), pp. 3760–3766, July 2006.
[7] T. A. S. Masutti and L. N. de Castro, "A self-organizing neural network using ideas from the immune system to solve the traveling salesman problem," Information Sciences, vol. 179, no. 10, pp. 1454–1468, 2009.
[8] M. Gorges-Schleuter, "Asparagos96 and the traveling salesman problem," in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 171–174, April 1997.
[9] G. Paun, G. Rozenberg, and A. Salomaa, DNA Computing: New Computing Paradigms, Springer, Berlin, Germany, 1998.
[10] W. Pullan, "Adapting the genetic algorithm to the travelling salesman problem," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1029–1035, 2003.
[11] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.
[12] S. Chen and C. Chien, "Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," Expert Systems with Applications, vol. 38, no. 12, pp. 14439–14450, 2011.
[13] F. C. Yang and Y. P. Wang, "Water flow-like algorithm for object grouping problems," Journal of the Chinese Institute of Industrial Engineers, vol. 24, no. 6, pp. 475–488, 2007.
[14] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
[15] T. Stützle, M. López-Ibáñez, P. Pellegrini et al., "Parameter adaptation in ant colony optimization," in Autonomous Search, pp. 191–215, Springer, New York, NY, USA, 2012.
[16] W. Leong and G. G. Yen, "PSO-based multiobjective optimization with dynamic population size and adaptive local archives," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 38, no. 5, pp. 1270–1293, 2008.
[17] K. C. Tan, T. H. Lee, and E. F. Khor, "Evolutionary algorithms with dynamic population size and local exploration for multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 5, no. 6, pp. 565–588, 2001.
[18] G. G. Yen and H. Lu, "Dynamic multiobjective evolutionary algorithm: adaptive cell-based rank and density estimation," IEEE Transactions on Evolutionary Computation, vol. 7, no. 3, pp. 253–274, 2003.
[19] K. de Jong, "Parameter setting in EAs: a 30 year perspective," in Parameter Setting in Evolutionary Algorithms, pp. 1–18, Springer, 2007.
[20] M. Gen and R. Cheng, Genetic Algorithms and Engineering Optimization, vol. 7, Wiley-Interscience, New York, NY, USA, 2000.
[21] T. Wu, S. Chung, and C. Chang, "A water flow-like algorithm for manufacturing cell formation problems," European Journal of Operational Research, vol. 205, no. 2, pp. 346–360, 2010.
[22] P. S. Shahrezaei, R. T. Moghaddam, M. Azarkish, and A. Sadeghnejad-Barkousaraie, "Water flow-like and differential evolution algorithms for a nurse scheduling problem," American Journal of Scientific Research, pp. 12–32, 2011.
[23] D. Kaur and M. M. Murugappan, "Performance enhancement in solving traveling salesman problem using hybrid genetic algorithm," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '08), pp. 1–6, May 2008.
[24] S. Lin and B. W. Kernighan, "An effective heuristic algorithm for the traveling-salesman problem," Operations Research, vol. 21, pp. 498–516, 1973.
[25] G. Reinelt, "TSPLIB: a traveling salesman problem library," ORSA Journal on Computing, vol. 3, no. 4, pp. 376–384, 1991.
[26] E. M. Cochrane and J. E. Beasley, "The co-adaptive neural network approach to the Euclidean travelling salesman problem," Neural Networks, vol. 16, no. 10, pp. 1499–1525, 2003.
Copyright
Copyright © 2014 Ayman Srour et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.