Abstract

A simple lexisearch algorithm that uses the path representation method for the asymmetric traveling salesman problem (ATSP) is proposed, along with an illustrative example, to obtain the exact optimal solution to the problem. Then a data-guided lexisearch algorithm is presented. First, the cost matrix of the problem is transposed or not, depending on the variances of its rows and columns, and then the simple lexisearch algorithm is applied. It is shown that this minor preprocessing of the data before the simple lexisearch algorithm is applied improves the computational time substantially. The efficiency of our algorithms has been examined against two existing algorithms for some TSPLIB and random instances of various sizes. The results show remarkably better performance of our algorithms, especially our data-guided algorithm.

1. Introduction

The traveling salesman problem (TSP) is one of the benchmark and old problems in computer science and operations research. It can be stated as follows:

A network with $n$ nodes (cities), “node 1” being the starting node, and a cost (or distance, or time, etc.) matrix $C = [c_{ij}]$ of order $n$ associated with the ordered pairs of nodes $(i, j)$ is given. The problem is to find a least cost Hamiltonian cycle. That is, the problem is to obtain a tour $(1, i_2, i_3, \ldots, i_n, 1)$ representing a cyclic permutation for which the total cost is minimum.

The TSP finds application in a variety of situations such as automatic drilling of printed circuit boards and threading of scan cells in a testable VLSI circuit [1], X-ray crystallography [2], and so forth. On the basis of the structure of the cost matrix, TSPs are classified into two groups—symmetric (STSP) and asymmetric (ATSP). The TSP is symmetric if $c_{ij} = c_{ji}$ for all $i, j$, and asymmetric otherwise. The TSP is an NP-complete combinatorial optimization problem [3]; roughly speaking, this means that solving instances with a large number of nodes is very difficult, if not impossible. Since ATSP instances are more complex, in many cases they are transformed into STSP instances and subsequently solved using STSP algorithms [4]. However, in the present study we solve ATSP instances without transforming them into STSP instances.

Since large instances cannot easily be solved optimally by an exact algorithm, a natural question arises: what is the maximum size of instance that can be solved by an exact algorithm? Branch and cut [5], branch and bound [6], and lexisearch [7, 8] are well-known exact algorithms. In our investigation, we apply a lexisearch algorithm to obtain exact optimal solutions to the problem. The lexisearch algorithm has been successfully applied to many combinatorial optimization problems. Pandit and Srinivas [9] showed that the lexisearch algorithm is better than the branch and bound algorithm. In a lexisearch algorithm, the lower bound plays a vital role in reducing the search space and, hence, the computational time. Also, preprocessing the data before the lexisearch algorithm is applied can reduce the computational effort substantially [9, 10].

In this paper, we first present a simple lexisearch algorithm using the path representation of a tour for obtaining exact optimal solutions to the ATSP. Then a data-guided lexisearch algorithm is proposed, incorporating a data processing method to further improve the efficiency of the algorithm. Finally, a comparative study of our algorithms is carried out against the lexisearch algorithm of Pandit and Srinivas [9] and against results of the integer programming formulation of Sherali et al. [11] for some TSPLIB and random instances of various sizes.

This paper is organized as follows: Section 2 presents a literature review on the problem. A simple lexisearch algorithm is developed in Section 3. Section 4 presents a data-guided lexisearch algorithm for the problem. Computational experience with the algorithms is reported in Section 5. Finally, Section 6 presents comments and concluding remarks.

2. Literature Review

The methods that provide the exact optimal solution to a problem are called exact methods. The brute-force method of exhaustive search is impractical even for moderately sized TSP instances. There are a few exact methods that find the exact optimal solution to the problem much more efficiently than this method.

Dantzig et al. [12] solved instances of the TSP by formulating them as integer programs and found an optimal solution to a 42-node problem using linear programming (LP). Sarin et al. [13] proposed a tighter formulation for the ATSP that replaces the subtour elimination constraints of Dantzig et al. [12] and solved five benchmark instances using a branch and bound approach; however, as reported in [13], it failed to provide competitive performance on the ATSP instances due to the size and structure of the LP relaxations. A class of formulations for the ATSP has been proposed by Sherali et al. [11], which is proved to be tighter than the formulation based on the subtour elimination constraints of Sarin et al. [13]. Also, Öncan et al. [14] surveyed 24 different ATSP formulations, discussed the strength of their LP relaxations, and reported that the formulation of Sherali et al. [11] gave the tightest lower bounds.

Balas and Toth [15] solved ATSP instances using a branch and bound approach with a bounding procedure based on the assignment relaxation of the problem. Currently, many good approximation algorithms based on the branch and bound approach have been developed for solving ATSP instances [16–18].

Pandit [19] developed a lexisearch algorithm for obtaining exact optimal solutions to the ATSP using the adjacency representation of a tour. As reported, the algorithm shows large variations in computational times for different instances of the same size. Murthy [20] proposed another scheme for the “search” sequence of Pandit’s [19] algorithm, which was expected to increase the computational efficiency of the algorithm to a considerable extent. But, as reported by Srinivas [21], the proposed algorithm is likely to be efficient only for highly skewed cost distributions and does not seem to be any better than the conventional branch and bound algorithm.

Pandit and Srinivas [9] again modified the lexisearch algorithm of Pandit [19]; their version is found to be better than previous lexisearch algorithms and the branch and bound algorithm. But, as reported by Ahmed [22], the algorithm shows large variations in computational times. It is interesting to see that randomly generated instances of the same size seem to fall into two distinct groups with respect to computational time: one group requires significantly less time than the average, while the other takes significantly more, with a big “gap” between the two groups.

There are mainly two ways of representing the salesman’s path in the context of the lexisearch approach, namely, path representation and adjacency representation. In the adjacency representation, permutations are generated in a systematic lexical order, but not all permutations lead to feasible solutions; hence, each permutation must be tested for acceptability. In the path representation, explicit testing for cycle formation is avoided, and hence there is a possibility of taking less computational time than with the other method. In fact, Ahmed and Pandit [23] used the path representation for solving the TSP with precedence constraints and found very good results. In this paper, we also use the path representation of a tour.

3. A Simple Lexisearch Algorithm for the ATSP

In the lexisearch approach, the set of all possible solutions to a problem is arranged in a hierarchy, like words in a dictionary, such that each incomplete word represents a block of words with this incomplete word as the leader of the block. Bounds are computed for the values of the objective function over these blocks of words and compared with the “best solution value” found so far. If no word in a block can be better than the “best solution value” found so far, we jump over the block to the next one. However, if the bound indicates a possibility of better solutions in the block, we enter the subblock by concatenating the present leader with an appropriate letter and set a bound for the new (sub)block so obtained [7, 8, 10].

3.1. Bias Removal

The basic difference between the Assignment Problem (AP) solution space and the TSP solution space is that the former can be viewed as the set of all permutations of $n$ elements, while the latter is restricted to “indecomposable” permutations only. So, the solution space of the TSP is a subset of the solution space of the AP, and hence steps like bias removal are useful in the case of the TSP [24].

For calculating the bias, we first calculate the row minima of the cost matrix $C = [c_{ij}]$ as
$$r_i = \min_{1 \le j \le n} c_{ij}, \quad i = 1, 2, \ldots, n.$$
Then we subtract each row minimum from its corresponding row elements in the matrix to obtain a new matrix $C' = [c'_{ij}]$ as
$$c'_{ij} = c_{ij} - r_i.$$
Then we calculate the column minima of the resultant cost matrix as
$$s_j = \min_{1 \le i \le n} c'_{ij}, \quad j = 1, 2, \ldots, n.$$
Next, we subtract each column minimum from its corresponding column elements in the resultant matrix to obtain the modified matrix $C'' = [c''_{ij}]$ as
$$c''_{ij} = c'_{ij} - s_j.$$

The bias calculation is shown in Table 1. Bias of the given matrix = (sum of row minima) + (sum of column minima) = 148. The modified (resultant) cost matrix is a nonnegative matrix with at least one zero in each row and in each column (see Table 2). In an AP, if we add or subtract a constant to every element of a row (or column) of the cost matrix, then an assignment that minimizes the total cost on one matrix also minimizes the total cost on the other. Since the TSP is a particular case of the AP, it is enough to solve the problem with respect to the modified cost matrix.
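To make these reductions concrete, here is a minimal Python sketch (ours, not part of the original algorithm description); it assumes the cost matrix is a square NumPy array whose diagonal carries a large sentinel value so that self-loops are never selected.

```python
import numpy as np

def remove_bias(cost):
    """Reduce rows, then columns; return the modified matrix and the bias."""
    c = cost.astype(float).copy()
    row_min = c.min(axis=1)   # r_i = min_j c_ij
    c -= row_min[:, None]     # c'_ij = c_ij - r_i
    col_min = c.min(axis=0)   # s_j = min_i c'_ij
    c -= col_min[None, :]     # c''_ij = c'_ij - s_j
    bias = row_min.sum() + col_min.sum()
    return c, bias
```

Because the reductions do not change which tour is optimal, the search can run entirely on the reduced matrix and add `bias` back at the end.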

3.2. Alphabet Table

Alphabet matrix $A = [a(i, p)]$ is a square matrix of order $n$ formed by the positions of the elements of the modified cost matrix $C'' = [c''_{ij}]$ of order $n$. The $i$th row of matrix $A$ consists of the positions of the elements in the $i$th row of matrix $C''$ when they are arranged in nondecreasing order of their values. If $a(i, p)$ stands for the $p$th element in the $i$th row of $A$, then $a(i, 1)$ corresponds to the position of the smallest element in the $i$th row of matrix $C''$ [7, 10]. The alphabet table is the combination of the elements of matrix $A$ and their values, as shown in Table 3.
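As a small illustrative sketch (our own, with 0-based indexing), the alphabet matrix is simply an argsort of each row of the modified cost matrix:

```python
def alphabet_matrix(c):
    """A[i] lists the positions of row i of c in nondecreasing order of
    value; the alphabet table pairs each position A[i][p] with its
    value c[i][A[i][p]]."""
    n = len(c)
    return [sorted(range(n), key=lambda j: c[i][j]) for i in range(n)]
```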

3.3. Lower Bound

The objective of the lower bound is to skip as many subproblems in the search procedure as possible. A subproblem is skipped if its lower bound exceeds the “best solution value” found so far (i.e., the upper bound) in the process. The higher the lower bound, the larger the set of subproblems that are skipped. Most methods in the literature consider the AP solution as the overall lower bound for an ATSP instance and develop algorithms based on AP relaxation and subtour elimination schemes. In this study, we do not set an overall lower bound for an instance; rather, we set a lower bound for each leader on the value of the objective function. The following method is used for setting the lower bound for each leader.

Suppose the partial tour is $(1, i_2, i_3, \ldots, i_k)$ and “node $j$” is selected for concatenation. Before concatenation, we check the bound for the leader $(1, i_2, i_3, \ldots, i_k, j)$. For that, we start our computation from the 2nd row of the “alphabet table” and traverse up to the $n$th row, and sum up the values of the first “legitimate” node (a node which is not present in the tour, where “node 1” is also counted as legitimate) in each row, excluding the $i_2$th, $i_3$th, $\ldots$, $i_k$th rows. This sum is the lower bound for the leader $(1, i_2, i_3, \ldots, i_k, j)$.
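Under this description, the bound for a leader can be sketched as follows (our own rendering, 0-based, with node 0 playing the role of “node 1”; `A` and `c` are the alphabet matrix and modified cost matrix of the previous sketches, and the helper name is ours):

```python
def leader_bound(A, c, tour, j):
    """Lower bound for the leader tour + [j]."""
    visited = set(tour)          # rows of 1, i2, ..., ik are excluded
    on_leader = visited | {j}
    bound = 0.0
    for i in range(len(A)):
        if i in visited:
            continue
        # first "legitimate" entry of row i: a node not on the leader,
        # except that node 0 (the return to the start) is always allowed
        for q in A[i]:
            if q != i and (q == 0 or q not in on_leader):
                bound += c[i][q]
                break
    return bound
```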

3.4. Simple Lexisearch Algorithm (LSA)

Let $C = [c_{ij}]$ be the given cost matrix. Then, for the nodes in the network, generate a zero-one vector $V = [v_i]$ of order $n$ as follows: $v_i = 1$ if “node $i$” is present in the partial tour, and $v_i = 0$ otherwise.

Though it is not a part of the algorithm, we follow this convention for checking the feasibility of a tour. A preliminary version of the algorithm was presented in Ahmed [22]. The algorithm is as follows.

Step 1. Remove the “bias” of the cost matrix and construct the “alphabet table” based on the modified cost matrix. Set the “best solution value” as large as possible. Since “node 1” is the starting node, we start our computation from the 1st row of the “alphabet table”. Initialize “partial tour value” = 0 and $q = 1$, and go to Step 2.

Step 2. Go to the $q$th element of the present row of the “alphabet table” (say, “node $p$”). If (partial tour value + present node value) is greater than or equal to the “best solution value”, go to Step 10; else, go to Step 3.

Step 3. If “node $p$” forms a subtour, drop it, increment $q$ by 1, and go to Step 8; else, go to Step 4.

Step 4. If all nodes of the network are visited, add an edge connecting “node $p$” to “node 1”, compute the complete tour value, and go to Step 5; else, go to Step 6.

Step 5. If the complete tour value is greater than or equal to the “best solution value”, go to Step 10; else, replace the “best solution value” by this tour value and go to Step 10.

Step 6. Calculate the lower bound of the present leader on the objective function value and go to Step 7.

Step 7. If (lower bound + partial tour value + present node value) is greater than or equal to the “best solution value”, drop “node $p$”, increment $q$ by 1, and go to Step 8; else, accept “node $p$”, compute the partial tour value, and go to Step 9.

Step 8. If $q$ is less than $n$ (the total number of nodes), go to Step 2; else, go to Step 10.

Step 9. Go to the subblock, that is, go to the $p$th row, put $q = 1$, and go to Step 2.

Step 10. Jump this block, that is, drop the present node and go back to the previous node in the tour (say, “node $h$”), that is, go to the $h$th row of the “alphabet table” and set $q = r + 1$, where $r$ is the index of the last “checked” node in that row. If “node $h$” is “node 1” and $r = n$, go to Step 11; else, go to Step 8.

Step 11. The “best solution value” is the optimal solution value with respect to the modified cost matrix. Add the “bias” to the optimal solution value to obtain the optimal solution value with respect to the original cost matrix, and go to Step 12.

Step 12. The current word gives the optimal tour sequence with respect to the original cost matrix; stop.
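For readers who prefer code, the following is a compact recursive Python sketch of Steps 1–12 (our own rendering, not the author’s implementation; the paper’s procedure is iterative, but the acceptance, bounding, and jumping decisions are the same). Nodes are 0-based and node 0 is the starting node.

```python
import math

def lexisearch_atsp(cost):
    """Exact ATSP by lexisearch with path representation: bias removal,
    alphabet table, and depth-first search in lexical order with
    lower-bound pruning."""
    n = len(cost)
    c = [[float(cost[i][j]) if i != j else math.inf for j in range(n)]
         for i in range(n)]
    bias = 0.0                                  # Section 3.1: bias removal
    for i in range(n):
        m = min(c[i]); bias += m
        c[i] = [x - m for x in c[i]]
    for j in range(n):
        m = min(c[i][j] for i in range(n)); bias += m
        for i in range(n):
            c[i][j] -= m
    # Section 3.2: alphabet table, each row sorted by reduced cost
    A = [sorted(range(n), key=lambda q: c[i][q]) for i in range(n)]
    best = {'value': math.inf, 'tour': None}

    def bound(visited, j):                      # Section 3.3
        lead = visited | {j}
        total = 0.0
        for i in range(n):
            if i in visited:
                continue
            for q in A[i]:                      # first legitimate entry
                if q != i and (q == 0 or q not in lead):
                    total += c[i][q]
                    break
        return total

    def search(tour, visited, sol):
        row = tour[-1]
        for q in A[row]:                        # Step 2: next letter in order
            val = c[row][q]
            if sol + val >= best['value']:
                break                           # Step 10: jump this block
            if q in visited:
                continue                        # Step 3: subtour, drop node q
            if len(tour) == n - 1:              # Steps 4-5: complete the tour
                total = sol + val + c[q][0]
                if total < best['value']:
                    best['value'], best['tour'] = total, tour + [q]
                continue
            if bound(visited, q) + sol + val < best['value']:   # Steps 6-7
                visited.add(q)
                search(tour + [q], visited, sol + val)  # Step 9: subblock
                visited.remove(q)

    search([0], {0}, 0.0)                       # Step 1: start at node 0
    return best['value'] + bias, best['tour']   # Steps 11-12

# A hypothetical 4-node instance:
cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(lexisearch_atsp(cost))                    # -> (21.0, [0, 2, 3, 1])
```

The `break` in Step 2 is what makes the alphabet ordering pay off: since each row of the table is sorted, once one entry cannot improve the best solution, no later entry in that row can, and the whole remaining block of words is jumped.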

3.5. Illustration of the Algorithm

Working of the above algorithm is explained through a seven-node example with the cost matrix given in Table 1. The logic flow of the algorithm at various stages is indicated in Table 4, which sequentially records the intermediate results together with the decision taken (i.e., remarks) at each step. The symbols used therein are listed below:

GS: go to sub-block;
JB: jump the current block;
JO: jump out to the next, higher-order block;
BS: best solution value.

As an illustration of the example, we initialize BS = 999 and “partial tour value (Sol)” = 0. We start from the 1st row of the “alphabet table”. Its first element is “node 4”, with “present node value (Val)” $= c''_{14} = 0$, and (Sol + Val) < BS. Now we go for the bound calculation for the present leader (1, 4); the bound will guide us as to whether node 4 is to be accepted or not.

Since (Bound + Sol + Val) = 0 + 0 + 0 = 0 < BS, we accept node 4, which leads to the partial tour (1, 4) with Sol = Sol + Val = 0 + 0 = 0.

Next, we go to the 4th row of the “alphabet table”. Since its first element forms a sub-tour, we consider the next element of the row, “node 3”, with Val $= c''_{43} = 0$ and (Sol + Val) < BS. Now, we go for the bound calculation for the present leader (1, 4, 3).

Since (Bound + Sol + Val) = 30 + 0 + 0 = 30 < BS, we accept node 3, which leads to the partial tour (1, 4, 3) with Sol = Sol + Val = 0 + 0 = 0. Proceeding in this way, we obtain the 1st complete tour with Sol = 57 < BS, so we replace BS = 57. Now, we jump out to the next higher-order block with Sol = 0 and try to compute another complete tour with a smaller tour value. Proceeding in this way, we obtain the optimal tour with value 10. Hence, the optimal tour value with respect to the given original cost matrix = bias + tour value = 148 + 10 = 158.

Our preliminary study on randomly generated instances shows that the above algorithm also produces two groups of instances of the same size in terms of computational times, with a gap between the groups. In the above algorithm, the nature of the data does not play any role. However, introducing a preprocessing of the cost matrix may reduce the computational time as well as the gap between the two groups; hence, we introduce a preprocessing technique in the next section.

4. A Data-Guided Lexisearch Algorithm for the ATSP

Introducing a preprocessing technique as done in the lexisearch algorithms of Pandit and Srinivas [9] and Ahmed [10] is not worthwhile for our algorithm. Of course, exchanging a row with the corresponding column in the cost matrix depending on their variances would have been worthwhile, but keeping track of the nodes would be more expensive. It is observed that, for some instances, solving the transposed cost matrix takes less time and yields the same tour value as the original matrix. Now, the question is under what condition the transposed matrix is to be considered instead of the given matrix. After studying many statistics of the cost matrix, we came to the conclusion that when the variances (standard deviations) of the rows are larger than those of the columns, our lexisearch algorithm takes less computational time. Hence, we introduce two preprocessing methods for the modified cost matrix, applied before our simple lexisearch algorithm, as follows.

Let $\sigma_i^r$ and $\sigma_i^c$ be the standard deviations of the $i$th row and the $i$th column of the modified cost matrix $C''$, for all $i = 1, 2, \ldots, n$. Process 1: we count how many times $\sigma_i^c > \sigma_i^r$. If this count is greater than $n/2$, then the transposed modified cost matrix is considered. Process 2: let $\sigma^r = \sum_{i=1}^{n} \sigma_i^r$ and $\sigma^c = \sum_{i=1}^{n} \sigma_i^c$. Next, we check whether $\sigma^c > \sigma^r$. If yes, then the transposed modified cost matrix is considered.
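A sketch of the two processes (ours; the direction of the tests follows the reconstruction above, and the helper name is hypothetical):

```python
import numpy as np

def maybe_transpose(c):
    """Decide whether to solve the transposed modified cost matrix.
    Returns the matrix to solve and a flag telling whether the final
    tour sequence must be reversed (data-guided Step 12)."""
    n = len(c)
    off = ~np.eye(n, dtype=bool)                # exclude diagonal sentinels
    row_sd = np.array([c[i][off[i]].std() for i in range(n)])
    col_sd = np.array([c[:, j][off[:, j]].std() for j in range(n)])
    process1 = (col_sd > row_sd).sum() > n / 2  # Process 1 (shown for comparison)
    process2 = col_sd.sum() > row_sd.sum()      # Process 2 (the one retained)
    if process2:
        return c.T.copy(), True
    return c, False
```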

The above preprocessing is incorporated in the data-guided lexisearch algorithm, which replaces Steps 1 and 12 of the algorithm described in Section 3.4 as follows.

Step 1. Remove the “bias” of the given cost matrix. Preprocess the modified cost matrix as described above and construct the “alphabet table” based on the (possibly transposed) modified cost matrix. Set the “best solution value” as large as possible. Since “node 1” is the starting node, we start our computation from the 1st row of the “alphabet table”. Initialize “partial tour value” = 0 and $q = 1$, and go to Step 2.

Step 12. The current word gives the optimal tour sequence with respect to the cost matrix used for the solution. If the transposed matrix was used, then take the reverse of the tour sequence as the optimal tour sequence with respect to the original cost matrix; stop.

5. Computational Experience

The simple lexisearch algorithm (LSA) and the data-guided lexisearch algorithm (DGLSA) have been encoded in Visual C++ on a Pentium IV personal computer with 3 GHz speed and 448 MB RAM under the MS Windows XP operating system. We have selected asymmetric TSPLIB [25] instances of sizes from 17 to 71. We report the solutions obtained within four hours on this machine as well as the total computational times (TotTime) in seconds. We also report the percentage of error of the solutions obtained by the algorithms, given by the formula $\mathrm{Error}(\%) = \frac{\mathrm{BestSol} - \mathrm{OptSol}}{\mathrm{OptSol}} \times 100$, where BestSol denotes the best/optimal solution obtained by the algorithms and OptSol denotes the optimal solution reported in TSPLIB.

Table 5 presents a comparative study of DGLSA using the two preprocessing methods, Process 1 and Process 2. It is seen from the table that the solutions obtained by both methods are the same. Also, except for the instance ftv33, the computational times are the same. For ftv33, Process 2 is found to be better than Process 1. Hence, we use DGLSA with Process 2 for comparison with the other algorithms.

For the purpose of performance comparison, the lexisearch algorithm of Pandit and Srinivas [9], which we name PSA, as reported in Srinivas [21], is encoded in Visual C++, run on the same machine, and tested on the same data set. The results are shown in Table 6. Out of thirteen instances, only four, five, and five instances were solved optimally within four hours by PSA, LSA, and DGLSA, respectively. For the remaining instances, except for three—ry48p, ftv64, and ftv70—LSA finds better solutions than PSA. On the other hand, except for two instances, ftv47 and ft53, DGLSA finds better solutions than LSA. On average, PSA, LSA, and DGLSA find solutions that are 7.70%, 5.84%, and 5.12% away from the exact optimal solutions, respectively. This tells us that, among the three algorithms, our data-guided algorithm is the best.

Table 6 shows that, on the basis of total computational time, except for br17 and ftv35, LSA takes less time than PSA to solve the instances. On the other hand, DGLSA takes time that is less than or equal to the time taken by LSA for all instances. Table 6 also reports the computational time at which the final solution is seen for the first time (FirstTime). In fact, a lexisearch algorithm first finds a solution and then proves the optimality of that solution, that is, all the remaining subproblems are discarded. The table shows that, on average, PSA, LSA, and DGLSA find a solution within at most 50%, 39%, and 30% of the total computational time, respectively. That is, PSA, LSA, and DGLSA spend at least 50%, 61%, and 70% of the total computational time on proving the solutions. Therefore, for these TSPLIB instances, PSA spends a relatively large share of its time on finding the solution compared with LSA, and LSA a relatively large share compared with DGLSA; hence, fewer subproblems are discarded by PSA than by our algorithms. On the other hand, DGLSA spends the largest share of its time on proving the solutions, and hence the largest number of subproblems are discarded. On the basis of computational time as well, LSA is found to be better than PSA, and DGLSA is found to be the best. DGLSA shows a good improvement over LSA on these instances in terms of both solution quality and computational time, so our goal is achieved very well.

Many good approximation methods based on branch and bound have recently been reported in the literature, but the nature of those algorithms is not the same as that of ours; hence, we cannot compare our algorithms with them. However, the performance of DGLSA is compared with that of the integer programming formulation of Sherali et al. [11] (called ATSP6 therein), which was run on a Dell Workstation PWS 650 with dual 2.5 GHz Xeon processors and 1.5 GB RAM under the Windows XP operating system and implemented in AMPL (version 8.1) with the CPLEX MIP solver (version 9.0). The same formulation was also implemented by Öncan et al. [14] (called SST2 therein) on a Pentium IV PC with a 3 GHz CPU using the barrier solver of CPLEX 9.0. Unfortunately, a direct comparison on the same computer cannot be made since, to the best of our knowledge, no online source code is available. However, our machine is around 1.5 times faster than the machine used for ATSP6 and approximately the same as the machine used for SST2 (see the machine speed comparison in Johnson [26]). Table 7 reports results for six asymmetric TSPLIB instances by DGLSA, together with the computational times (in seconds), solutions (lower bounds), and percentages of error of ATSP6 and SST2. Here, the percentage of error is given by the formula $\mathrm{Error}(\%) = \frac{\mathrm{OptSol} - \mathrm{BestSol}}{\mathrm{OptSol}} \times 100$, where BestSol denotes the best solution (lower bound) obtained by ATSP6 or SST2.

From Table 7, it is seen that, for br17, DGLSA takes more computational time than ATSP6; for the remaining instances, DGLSA takes the least computational time. Out of six instances, two and five instances were solved optimally by DGLSA and ATSP6, respectively. On average, DGLSA is the best, finding solutions that are 0.04% away from the exact optimal solutions. On the whole, DGLSA is found to be the best. Also, solutions by lexisearch algorithms do not rely on commercial mathematical software.

We also ran the lexisearch algorithms on asymmetric randomly generated instances of different sizes, generating 20 different instances for each size. We include two statistics to summarize the results: the average computational time (in seconds) and the standard deviation of the times. Table 8 presents results for asymmetric instances drawn from a uniform distribution of integers.

On the basis of average computational times, Table 8 shows that, for $n = 30$, LSA is the best and PSA is the worst, and there is no improvement in average computational time or time variation by DGLSA over LSA; of course, DGLSA is better than PSA. For $n = 35$ and $40$, LSA is better than PSA, and there is some improvement in average times by DGLSA over LSA; hence, DGLSA is the best. For $n = 45$ and $50$, PSA is better than LSA, but there is a very good improvement in average times by DGLSA over LSA, and DGLSA is again found to be the best. Overall, there is an improvement of more than 41% in average computational time by DGLSA over LSA. Of course, for a few problems, DGLSA takes more time than LSA; we investigated in this direction but could not come to any conclusion. To visualize more clearly the trends of the different algorithms as the size of the problem increases, we present, in Figure 1, curves of the average computational (solution) times of the algorithms against increasing sizes. It is clear from Figure 1 that, as the size increases, PSA performs better than LSA, and DGLSA performs best.

On the basis of time deviations, Table 8 shows that PSA is the worst and DGLSA is the best. PSA shows large variations in computational time for different instances of the same size, and the problems seem to fall into two distinct groups with a big “gap” between them, which is similar to the observation made by Ahmed [22]. As the size increases, the variations among the computational times increase rapidly. LSA shows smaller variances than PSA, and DGLSA shows the lowest variances. In fact, one of the objectives of LSA was to reduce the gap among the computational times relative to PSA, which is achieved to some extent: there is an improvement of more than 18% in the overall time variance by LSA over PSA. To reduce the time variances further, DGLSA was developed, and it yields an improvement of around 38% and 50% in the overall time variance over LSA and PSA, respectively. We also present, in Figure 2, curves of the standard deviations of the computational times of the algorithms against increasing sizes. It is clear from Figure 2 that, as the size increases, the time variances of PSA increase more rapidly than those of LSA and DGLSA, and the trend of increase is lowest for DGLSA. Of course, for small-sized problems, the variation by DGLSA is high.

To show further that the performance of DGLSA is the best for any asymmetric instance, we consider instances drawn from uniform distributions of integers over different intervals. Table 9 presents the average computational times and the standard deviations of the times for values of $n$ from 30 to 45 for instances drawn from uniform distributions of integers over five different intervals. The performance of DGLSA does not change very much as the cost range increases. On the basis of the average computational times, Table 9 shows that, for two of the intervals, LSA is better than PSA and DGLSA is the best; for two other intervals, DGLSA is better than LSA and PSA is the best; and for the remaining interval, there is little difference among the algorithms, with DGLSA the best. However, on the basis of the overall average computational times and the variations in times, PSA is better than LSA, and DGLSA is the best. Asymmetric TSPLIB instances can be considered “hard”; that is, even small instances take large computational times, more than for asymmetric random instances.

Further, to show that LSA and DGLSA are better than PSA, we ran twenty symmetric TSPLIB instances of sizes from 14 to 76 and report, in Table 10, the computational times and the solutions obtained within four hours. Since the condition for taking the transposed matrix is not satisfied for symmetric instances, the results obtained by LSA and DGLSA are the same; hence, we report results for LSA only. In fact, applying DGLSA makes no sense for symmetric instances because there is no computational benefit in solving the transposed cost matrix. It is to be noted that the lexisearch algorithms do not require any modification for solving different types of instances. Out of twenty instances, nine and ten instances were solved optimally by PSA and LSA, respectively. For the remaining instances, except for two—gr48 and brazil58—LSA finds better solutions than PSA. On average, PSA and LSA find solutions that are 8.45% and 4.70% away from the exact optimal solutions, respectively. Regarding computational time, except for the two instances gr24 and gr48, LSA finds its final solution quicker than PSA. On average, PSA and LSA find their final solutions within 31% and 37% of the total computational times, respectively; that is, PSA and LSA spend 69% and 63% of the total time on establishing the final solutions. From this computational experience, we can conclude that, for the TSPLIB as well as the random instances, DGLSA is the best among all algorithms considered in this study, and it finds optimal solutions quickly for asymmetric instances.

6. Conclusion

We presented simple and data-guided lexisearch algorithms that use the path representation of a tour for the benchmark asymmetric traveling salesman problem to obtain exact optimal solutions. The performance of our algorithms was compared against the lexisearch algorithm of Pandit and Srinivas [9] and against solutions obtained by the integer programming formulation of Sherali et al. [11] for some TSPLIB instances. Our data-guided algorithm is found to be very effective for these instances. For random asymmetric instances also, the data-guided lexisearch algorithm is found to be effective, and it is not very sensitive to changes in the range of the uniform distribution.

For symmetric TSPLIB instances, our simple algorithm is found to be better than the algorithm of Pandit and Srinivas [9]. However, the proposed data-guided algorithm is not applicable to symmetric instances; hence, we do not report any computational experience for random symmetric instances. In general, a lexisearch algorithm first finds an optimal solution and then proves the optimality of that solution. The lexisearch algorithm of Pandit and Srinivas [9] spends a relatively large amount of time on finding the optimal/best solution compared with our algorithms.

We found that, using our proposed algorithms, asymmetric TSPLIB instances of maximum size 45 and asymmetric random instances of maximum size 50 could be solved within four hours of computational time. This shows that asymmetric TSPLIB instances are harder than asymmetric random instances. For symmetric instances, our data-guided algorithm is not applicable; hence, we could not draw any conclusion for them. However, using our simple algorithm, we could only find the optimal solution, without proving its optimality, for instances of maximum size 42. Of course, we made no modification to the algorithm for applying it to different types of instances.

Though we have proposed a data-guided module for the lexisearch algorithm, for some instances it still takes more time than the simple lexisearch algorithm, and it is applicable only to asymmetric instances. A closer look at the structure of the instances, followed by the development of a more sophisticated data-guided module, may make the approach applicable to symmetric instances and may further reduce the computational time and provide better solutions for large instances. Another direction of research is to propose a better lower bound technique, which may reduce the solution space to be searched and hence the computational time.

Acknowledgments

The author wishes to acknowledge Professor S. N. Narahari Pandit, Hyderabad, India, for his valuable suggestions and moral support. This research was supported by the Deanery of Academic Research, Al-Imam Muhammad Ibn Saud Islamic University, Saudi Arabia, via Grant no. 280904. The author is also thankful to the anonymous reviewer for the valuable comments and suggestions.