Abstract

The ordered clustered travelling salesman problem is a variation of the usual travelling salesman problem in which a set of vertices (except the starting vertex) of the network is divided into some prespecified clusters. The objective is to find the least cost Hamiltonian tour in which the vertices of any cluster are visited contiguously and the clusters are visited in the prespecified order. The problem is NP-hard, and it arises in practical transportation and sequencing problems. This paper develops a hybrid genetic algorithm using sequential constructive crossover, 2-opt search, and a local search to obtain heuristic solutions to the problem. The efficiency of the algorithm has been examined against two existing algorithms on some asymmetric and symmetric TSPLIB instances of various sizes. The computational results show that the proposed algorithm is very effective in terms of solution quality and computational time. Finally, we present solutions for some additional symmetric TSPLIB instances.

1. Introduction

The clustered travelling salesman problem (CTSP), introduced by Chisman [1], is a variation of the usual travelling salesman problem (TSP). It can be defined as follows: let G = (V, E) be a complete undirected graph with vertex set V = {1, 2, ..., n} and edge set E. The vertex set V, except the starting vertex (depot) 1, is partitioned into k prespecified clusters V_1, V_2, ..., V_k. The number of vertices in the clusters (i.e., the size of the clusters) is n_1, n_2, ..., n_k, respectively. A cost matrix C = [c(i, j)] representing travel costs, distances, or travel times is defined on the edge set E. Starting from the depot 1, the objective of the CTSP is to determine the least cost Hamiltonian tour on G in which the vertices of any cluster are visited contiguously and the clusters are visited in the order V_1, V_2, ..., V_k.
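To make the tour requirements concrete, the following minimal C++ sketch (not taken from the paper) evaluates a candidate tour under these rules: it verifies that the tour starts at the depot and that the cluster indices along the tour never decrease, which enforces both contiguity and the prespecified cluster order, and it returns the tour cost. The 0-based vertex labels, the cost-matrix type, and the cluster-lookup array are illustrative assumptions.

```cpp
#include <vector>
#include <optional>
#include <cstddef>

// Illustrative conventions (assumptions, not the paper's notation): vertices are
// 0..n-1 with vertex 0 as the depot, cost[i][j] is the travel cost from i to j,
// and cluster[v] is the cluster number of vertex v (1..k, with cluster[0] = 0).
using Matrix = std::vector<std::vector<double>>;

// Returns the cost of a complete tour if it is OCTSP-feasible (starts at the
// depot, visits every vertex exactly once, and the cluster numbers along the
// tour never decrease, which enforces both contiguity and the prespecified
// cluster order); returns std::nullopt otherwise.
std::optional<double> octspTourCost(const std::vector<int>& tour,
                                    const Matrix& cost,
                                    const std::vector<int>& cluster) {
    const std::size_t n = cost.size();
    if (n == 0 || tour.size() != n || tour.front() != 0) return std::nullopt;

    std::vector<char> seen(n, 0);
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const int v = tour[i];
        if (seen[v]) return std::nullopt;                // vertex repeated
        seen[v] = 1;
        if (i > 0 && cluster[v] < cluster[tour[i - 1]])
            return std::nullopt;                         // cluster order violated
        total += cost[v][tour[(i + 1) % n]];             // edge to the next vertex (wraps to the depot)
    }
    return total;
}
```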

There are several variants of the problem depending on whether the start and end vertices of a cluster as well as the number and order of the clusters have been specified. If the number of clusters is one, or each cluster has only one vertex, then the problem becomes the usual TSP. If the number of clusters is two, then the problem is called the TSP with backhauls (TSPB) [2]. In the free CTSP, the cluster order is not prespecified, and the problem is to simultaneously determine the optimal cluster order as well as the routing within and between clusters. This paper focuses on the variant with a specified order of clusters and unspecified end vertices of the clusters, which is called the ordered CTSP (OCTSP). For simplicity, we label the vertices with the natural numbers from 1 to n and, thus, assume that the labels of the vertices of any cluster are less than the labels of the vertices of the following clusters.

Since all these variations are generalizations of the usual TSP, they are all NP-hard [3]. The CTSP has many applications in real life, for example, in automated warehouse routing [1, 4], in production planning [4], and in vehicle routing, manufacturing, computer operations, examination timetabling, cytological testing, and integrated circuit testing [5, 6]. Chisman [1] showed that the CTSP can be transformed into a TSP by adding or subtracting an arbitrarily large constant to or from the cost of each intercluster edge. Therefore, after the transformation, any exact algorithm for the TSP can be applied to solve the problem exactly. However, as the problem size increases, finding exact optimal solutions to CTSP instances becomes impractical, and hence heuristics must be used.

We seek approximate solutions to the OCTSP using a heuristic algorithm. For the TSP and related problems, well-known heuristic approaches include genetic algorithms (GAs), tabu search (TS), artificial neural networks (ANNs), simulated annealing (SA), approximation algorithms, and so forth. Among these heuristics, GAs are found to be among the best algorithms for the TSP and its variations. Since the OCTSP is a variation of the usual TSP, we develop a hybrid GA (HGA) using sequential constructive crossover [7], 2-opt search, and a local search [8] to obtain heuristic solutions to the problem. The efficiency of our algorithm has been examined against the partitioning algorithm [9] on some medium sized asymmetric TSPLIB [10] instances and the lexisearch algorithm [11] on some small sized symmetric TSPLIB instances. The computational experiments show the effectiveness of our proposed HGA. Finally, we present solutions for some medium sized symmetric TSPLIB instances. To the best of our knowledge, no literature presents solutions for these symmetric instances; hence, we could not provide any comparative study of these results.

This paper is organized as follows. Section 2 presents a detailed literature review of the problem. A hybrid genetic algorithm is developed and reported in Section 3. Computational experiments with the algorithm are presented in Section 4. Finally, Section 5 presents comments and concluding remarks.

2. Literature Review

Chisman [1] transformed the CTSP into the usual TSP and then applied a branch and bound approach [12] to solve the problem exactly but did not obtain very good results. Thereafter, Lokin [4] and Jongens and Volgenant [13] applied exact algorithms to find exact optimal solutions to the problem. Aramgiatisiris [9] developed an exact partitioning algorithm (LBDCOMP therein) by transforming the problem into the TSPB, then solving the linehaul and backhaul subproblems independently, and finally reformulating it as a direct shortest path problem on a bipartite graph. However, the algorithm does not really obtain exact solutions for many instances [11].

An approximation algorithm with good empirical performance [14] was developed to solve the problem with a prespecified order on the clusters. Three more heuristics were also proposed and compared there. As reported, the best results were obtained by the heuristic that first transforms the problem into a TSP and then applies the GENIUS heuristic.

Laporte et al. [5] proposed a TS heuristic, combined with a diversification phase using a GA, to solve the problem with a prespecified order of visiting the clusters. As reported, the TS outperforms the GA [15] that exploits order-based crossover operators and local search heuristics. However, when compared with a postoptimization procedure [14], the TS obtained better quality solutions but required more computational time.

Another GA was developed for the problem [16] that first finds intercluster paths and then intracluster paths. Finally, a comparative study of the GA was presented against a GENIUS heuristic [14] and lower bounds [13]. As reported, the GA could solve instances up to 500 vertices with 4 and 10 clusters and obtained solutions within 5.5% of the lower bound.

An approximation algorithm with a 5/3 performance ratio has been developed for the OCTSP with a prespecified visiting sequence for the clusters [17]. Another approximation algorithm has been proposed that guarantees bounded performance for some variants of the CTSP [3]. For the problem with unspecified end vertices, its algorithm first uses a modified Christofides’ algorithm [18] to get the shortest free-ends Hamiltonian paths in each cluster. After the first step, the two end vertices for each cluster and the intracluster paths are specified. Then a rural postman problem algorithm is used to connect all the intracluster paths to form a whole tour. This algorithm favours the intracluster Hamiltonian paths, which implies that the intercluster paths may be sacrificed once the end vertices in each cluster have been determined.

A two-level-TSP hierarchical algorithm that favours intercluster paths has been proposed for the CTSP [19]. First, the shortest intercluster paths connecting every cluster have been specified; then the start and end vertices are specified for each cluster. Next, a modified Christofides’ algorithm [18] is used to get the shortest Hamiltonian paths with two specified end vertices in each cluster. At the end, a whole tour is formed by combining the paths generated in both levels. They also showed that the penalties caused by favoring the intracluster Hamiltonian paths and the intercluster paths are comparable.

A two-level GA (TLGA) has been developed for solving CTSP with unspecified end vertices [20]. The algorithm first finds the shortest Hamiltonian cycle for each cluster and then connects all the intracluster paths in a certain sequence to form a whole tour. In the lower level, a GA is used to find the shortest Hamiltonian cycle rather than the shortest Hamiltonian path for each cluster. In the higher level, a modified GA is designed to determine an edge that will be deleted from the shortest Hamiltonian cycle for each cluster and the visiting sequence of all the clusters with the objective of shortest travelling tour for the whole problem. The higher level algorithm has the freedom to delete any edge of the clusters while searching for the shortest complete tour. Test results demonstrate that the TLGA for large TSPs is more effective and efficient than the classical GA.

3. A Hybrid Genetic Algorithm for the OCTSP

3.1. A Brief Overview of GAs

GAs are structured, yet randomized, search methods based on mimicking the survival of the fittest among the species generated by random changes in the gene structure of the chromosomes in evolutionary biology [21]. They start with a population of chromosomes (solutions) that evolves from one generation to the next. Each generation consists of the following three operations.

(a) Selection. This procedure is a stochastic process that mimics the “survival-of-the-fittest” theory. No new chromosome is created here. Some of the chromosomes are copied (possibly more than once) to the next generation probabilistically based on their objective function value, whereas the other chromosomes are discarded.

(b) Crossover. This is a binary operator that is applied to two parent chromosomes with a large probability and creates new offspring chromosome(s). It is a very important operator in GAs; together with the selection operator, it is found to be the most powerful process in the GA search.

(c) Mutation. This is a unary operator that is applied to each chromosome with a small probability. It is the occasional random change of some selected gene(s) of a chromosome to diversify the GA search space.

Starting from a randomly or heuristically generated initial population, the GA search repeats the above three operations until the stopping criterion is satisfied. The crossover operator is a unique feature of GAs; it aims to combine good quality parent chromosomes to create one or more new offspring chromosome(s). However, crossover alone cannot generate high quality chromosomes for combinatorial optimization problems such as the TSP and its variations. Thus, powerful local search methods are incorporated to improve the quality of the offspring chromosomes [5]. In hybrid GAs, the crossover operator generates new starting solutions for the local search methods.
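The following skeleton is one possible way to organize such a hybrid GA generation loop; the operator callables and the replacement rule shown are placeholders standing in for the specific operators described in Sections 3.4–3.9, not the paper's exact control flow.

```cpp
#include <vector>
#include <functional>

using Chromosome = std::vector<int>;

struct HgaOperators {
    std::function<Chromosome()> sample;                         // initial-population sampling (Section 3.4)
    std::function<void(std::vector<Chromosome>&)> select;       // selection (Section 3.5)
    std::function<Chromosome(const Chromosome&, const Chromosome&)> crossover;  // SCX (Section 3.6)
    std::function<void(Chromosome&)> mutate;                    // mutation (Section 3.7)
    std::function<void(Chromosome&)> localSearch;               // local search (Section 3.8)
    std::function<double(const Chromosome&)> cost;              // tour cost (objective)
};

// One possible shape of the generation loop; details such as when the local
// search is applied or how immigration is handled differ in the paper.
std::vector<Chromosome> runHga(const HgaOperators& op, int popSize, int maxGen) {
    std::vector<Chromosome> pop(popSize);
    for (auto& ch : pop) { ch = op.sample(); op.localSearch(ch); }   // improved initial population

    for (int gen = 0; gen < maxGen; ++gen) {
        op.select(pop);                                   // copy fitter chromosomes forward
        for (int i = 0; i < popSize; ++i) {               // sequential pairing of parents
            Chromosome child = op.crossover(pop[i], pop[(i + 1) % popSize]);
            op.localSearch(child);                        // hybridization step
            if (op.cost(child) < op.cost(pop[i])) pop[i] = child;    // keep improving offspring
        }
        for (auto& ch : pop) op.mutate(ch);               // low-probability mutation
    }
    return pop;
}
```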

GAs are found to be successful heuristic algorithms for solving the usual TSP and its variations. Although GAs do not guarantee optimality of the solution, they can find very good, near optimal solutions in a very short time. For the OCTSP, we apply the crossover, mutation, and local search methods to each cluster in the prespecified order. The result of the GA for a 7-vertex problem instance is a complete tour, as shown in Figure 1.

3.2. Bias Removal

A bias removal step was adopted in the lexisearch algorithm [11] and found effective for the CTSP. We also consider the bias removal step in our GA. The main advantage of the bias removal is that a large part of the solution value is kept fixed, and the search has to account only for the remaining small value. The process for removing the bias of the cost matrix is as follows: subtract each row minimum from the elements of its row, and then repeat the same column-wise on the resultant matrix. The total of the row minima and the subsequent column minima is called the “bias” of the matrix. However, we have not incorporated the cluster precedence relations into our cost matrix. This does not affect the value of a chromosome, since the cluster precedence relations are taken care of while generating a chromosome.

The bias of the cost matrix given in Table 1 is (row minima + column minima = 129 + 19 =) 148. The reduced cost matrix (i.e., after removing the bias of the matrix) is shown in Table 2. We solve the problem with respect to the reduced cost matrix. After we find the solution value with respect to the reduced matrix, we add the bias to that value to obtain the solution value with respect to the original cost matrix.
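A minimal sketch of the row-then-column reduction described above is given below; it assumes the cost matrix is stored as a dense 2D vector with very large diagonal entries (so the diagonal never supplies a minimum), which is a common convention rather than something stated in the paper.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Row-then-column reduction of the cost matrix.  Returns the bias (sum of the
// row minima plus the subsequent column minima) and leaves the reduced matrix
// in `c`.  Assumes c[i][j] is the cost from i to j and that diagonal entries
// hold a very large value, so they never act as a row or column minimum.
double removeBias(std::vector<std::vector<double>>& c) {
    const std::size_t n = c.size();
    double bias = 0.0;
    for (std::size_t i = 0; i < n; ++i) {                      // subtract each row minimum
        const double rowMin = *std::min_element(c[i].begin(), c[i].end());
        for (double& x : c[i]) x -= rowMin;
        bias += rowMin;
    }
    for (std::size_t j = 0; j < n; ++j) {                      // then each column minimum
        double colMin = c[0][j];
        for (std::size_t i = 1; i < n; ++i) colMin = std::min(colMin, c[i][j]);
        for (std::size_t i = 0; i < n; ++i) c[i][j] -= colMin;
        bias += colMin;
    }
    return bias;
}
```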

3.3. Alphabet Table

The alphabet matrix A = [a(i, j)] is a square matrix of order n formed by the positions of the elements of the reduced cost matrix of order n when they are arranged in nondecreasing order of their costs. The alphabet table is the combination of the elements (vertices) of the matrix A and their costs in the reduced matrix [11]. The alphabet table for the reduced cost matrix in Table 2 is shown in Table 3. This alphabet table also does not take the cluster precedence relations into account.
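One possible construction of the alphabet matrix is sketched below: for each row of the reduced cost matrix, the column indices are sorted by nondecreasing reduced cost; pairing each sorted index with its cost then gives the alphabet table. The 0-based indexing is an assumption made for the illustration.

```cpp
#include <vector>
#include <algorithm>
#include <numeric>

// Builds the alphabet matrix A: row i of A lists all vertices j sorted by
// nondecreasing reduced cost c[i][j]; pairing A[i][r] with c[i][A[i][r]] gives
// the alphabet table.  The diagonal is kept for simplicity and is assumed to
// carry a very large cost, so it sorts to the end of each row.
std::vector<std::vector<int>> buildAlphabetMatrix(
        const std::vector<std::vector<double>>& c) {
    const int n = static_cast<int>(c.size());
    std::vector<std::vector<int>> A(n, std::vector<int>(n));
    for (int i = 0; i < n; ++i) {
        std::iota(A[i].begin(), A[i].end(), 0);                // columns 0, 1, ..., n-1
        std::stable_sort(A[i].begin(), A[i].end(),
                         [&c, i](int a, int b) { return c[i][a] < c[i][b]; });
    }
    return A;
}
```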

3.4. Improved Initial Population

The path representation of a chromosome is used in our GA. In this representation, each vertex is assigned a unique natural number from 1 to n; that is, the genes are natural numbers. The path of the salesman is represented by a chromosome, which is a permutation of these n genes. A gene segment is defined as a permutation of the vertices of a cluster, and a chromosome is a permutation of all the gene segments with one gene segment per cluster. For example, let {1, 2, ..., 7} be the vertices of a 7-vertex instance with V_1 = {2, 3, 4} and V_2 = {5, 6, 7}, where V_1 is followed by V_2; then, starting from vertex 1, a complete tour {1→3→4→2→6→7→5→1} may be represented as the chromosome (1, 3, 4, 2, 6, 7, 5), where (3, 4, 2) and (6, 7, 5) are the gene segments for cluster 1 and cluster 2, respectively.

It is to be noted that starting from a good initial population can deliver better quality solutions quickly, which is why many works report generating the initial population using heuristics. Hence, we use a sequential sampling algorithm for heuristically generating the initial population; it has been applied successfully to the bottleneck TSP [22]. This algorithm is a simple version of the sequential constructive sampling algorithm [8]. It is basically a probabilistic method for generating a tour of the salesman. The probability of visiting each unvisited vertex of a cluster in a row of the alphabet table is assigned in such a way that the first unvisited vertex gets a higher probability than the second one, and so on. Thereafter, the cumulative probability of each unvisited vertex of the cluster is calculated. Next, a random number r between 0 and 1 is generated, and the vertex whose cumulative probability range contains r is accepted. The probability of visiting each unvisited vertex of a cluster is assigned as follows. Suppose the number of unvisited vertices of a cluster in a row of the alphabet table is m; then the probability of visiting the ith unvisited vertex is

2(m − i + 1) / (m(m + 1)),  for i = 1, 2, ..., m.  (1)

The algorithm may be summarized as follows.

Step 0. Construct the “alphabet table” based on the reduced cost matrix. Repeat the following steps until the fixed population size is reached.

Step 1. Since “vertex 1” is the starting vertex, initialize the current vertex p = 1 and go to Step 2.

Step 2. Go to the pth row of the “alphabet table” and visit probabilistically, by using (1), any unvisited vertex of that row (say, vertex q) belonging to the present cluster, and go to Step 3.

Step 3. Rename “vertex q” as “vertex p” and go to Step 4.

Step 4. If all vertices of the present cluster are visited then go to the next cluster in the order (if any) and go to Step 5; else go to Step 2.

Step 5. If all vertices of the network are visited then go to Step 1 for generating another chromosome in the population; else go to Step 2.

Let us illustrate the algorithm through the example given in Table 1 with V_1 = {2, 3, 4} and V_2 = {5, 6, 7}, where V_1 is followed by V_2. We start from the 1st row of the “alphabet table.” The number of unvisited vertices of the 1st cluster in that row is 3, namely, 4, 2, and 3, with cumulative probabilities 0.500, 0.833, and 1.000, respectively. Suppose vertex 4 is selected randomly; then the partial chromosome will be (1, 4). Next, we go to the 4th row of the “alphabet table” and probabilistically select another vertex, and so on. Proceeding in this way may lead to the complete chromosome (1, 4, 2, 3, 6, 5, 7).
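The probabilistic choice in Step 2 could be implemented as follows; the sketch assigns the probabilities of (1) to the unvisited vertices of the current cluster, taken in alphabet-table order, and picks one with a single uniform random draw, matching the cumulative probabilities 0.500, 0.833, and 1.000 of the example above. Function and variable names are illustrative.

```cpp
#include <vector>
#include <random>

// Picks one vertex from `candidates` (the unvisited vertices of the current
// cluster, already in alphabet-table order for the current row, truncated to at
// most the first ten as described in the text) using the probabilities of (1):
// the ith candidate is chosen with probability 2(m - i + 1) / (m(m + 1)).
int sampleNextVertex(const std::vector<int>& candidates, std::mt19937& rng) {
    const int m = static_cast<int>(candidates.size());
    if (m == 0) return -1;                                     // caller ensures this does not happen
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double r = uni(rng);
    double cumulative = 0.0;
    for (int i = 1; i <= m; ++i) {
        cumulative += 2.0 * (m - i + 1) / (static_cast<double>(m) * (m + 1));
        if (r <= cumulative) return candidates[i - 1];
    }
    return candidates[m - 1];                                  // guard against rounding error
}
```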

A preliminary study shows the effectiveness of the sampling algorithm for the initial population. However, if, instead of considering all unvisited vertices, we consider at most the first ten unvisited vertices of a cluster, the algorithm generates a very good population. A similar observation has been reported for the bottleneck TSP as well [22]. Hence, we consider this restricted domain of unvisited vertices of a cluster for our study. Further, to start with a better population, we apply the 2-opt search to each chromosome. The 2-opt search removes two edges and then replaces them by a different set of edges in such a way as to maintain the feasibility of the tour. Let a, b, c, and d be four vertices in a cluster; then, if the edges (a, b) and (c, d) are removed, the only way to form a new valid tour is to connect a to c and b to d.
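A 2-opt pass restricted to one cluster's segment of the chromosome might look like the sketch below; the cost delta shown assumes a symmetric cost matrix (for asymmetric instances a segment reversal also changes the interior edge costs), and the segment bounds are an indexing assumption.

```cpp
#include <vector>
#include <algorithm>

// One full 2-opt pass over the chromosome positions first..last (inclusive),
// which are assumed to hold the vertices of a single cluster (first >= 1, with
// the depot at position 0); vertices outside the segment are never moved, so
// cluster contiguity is preserved.  The cost delta below is valid for symmetric
// instances only.  Returns true if at least one improving move was applied.
bool twoOptPass(std::vector<int>& tour, int first, int last,
                const std::vector<std::vector<double>>& c) {
    const int n = static_cast<int>(tour.size());
    bool improved = false;
    for (int i = first - 1; i < last; ++i) {          // removes edge (tour[i], tour[i+1])
        for (int j = i + 1; j <= last; ++j) {         // removes edge (tour[j], tour[j+1])
            const int a = tour[i], b = tour[i + 1];
            const int d = tour[j], e = tour[(j + 1) % n];
            const double delta = c[a][d] + c[b][e] - (c[a][b] + c[d][e]);
            if (delta < -1e-9) {                      // improving exchange found
                std::reverse(tour.begin() + i + 1, tour.begin() + j + 1);
                improved = true;
            }
        }
    }
    return improved;
}
```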

3.5. Fitness Function and Selection Method

The objective function of each chromosome in the population is the cost of the tour represented by the chromosome. The fitness function of a chromosome is defined as multiplicative inverse of the objective function. There are various selection methods in the literature. The selection operation considered for our study is the stochastic remainder selection method [23].
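As a rough illustration of stochastic remainder selection (here in a remainder-roulette form, which is one common variant and not necessarily the exact procedure of [23]), each chromosome first receives the integer part of its expected copy count, and the remaining mating-pool slots are filled using the fractional parts:

```cpp
#include <vector>
#include <random>
#include <cmath>

// Stochastic remainder selection sketch: fitness[i] is the multiplicative
// inverse of chromosome i's tour cost, as defined above.  Each chromosome gets
// floor(fitness[i] / averageFitness) deterministic copies; the remaining slots
// are filled by a roulette wheel over the fractional parts.  Assumes a
// non-empty fitness vector.  Returns the indices copied into the mating pool.
std::vector<int> stochasticRemainderSelect(const std::vector<double>& fitness,
                                           std::mt19937& rng) {
    const int n = static_cast<int>(fitness.size());
    double avg = 0.0;
    for (double f : fitness) avg += f;
    avg /= n;

    std::vector<int> mates;
    std::vector<double> frac(n);
    for (int i = 0; i < n; ++i) {
        const double expected = fitness[i] / avg;          // expected number of copies
        const int copies = static_cast<int>(std::floor(expected));
        frac[i] = expected - copies;                       // leftover fractional part
        mates.insert(mates.end(), copies, i);              // deterministic copies
    }
    if (static_cast<int>(mates.size()) < n) {              // fill the rest probabilistically
        std::discrete_distribution<int> wheel(frac.begin(), frac.end());
        while (static_cast<int>(mates.size()) < n) mates.push_back(wheel(rng));
    }
    mates.resize(n);                                       // never exceed the population size
    return mates;
}
```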

3.6. The Sequential Constructive Crossover Operation

Since the crossover operation plays the main role in GAs, several crossover operators have been proposed for the usual TSP, which have then been used for its variants as well. Among them, the sequential constructive crossover (SCX) [7] is found to be one of the best crossover operators for the usual TSP. A multiparent extension of the SCX has been applied to the usual TSP and obtained good results [24]. The SCX has also been successfully applied to some other variations of the TSP [22, 25]. In general, it produces an offspring using better edges of the parents. However, it does not depend only on the parents’ structure; it sometimes introduces new, but good, edges to the offspring, which are not even available in the present population. We modify the SCX operator for the OCTSP as follows.

Step 1. Start from “vertex 1” (i.e., the current vertex p = 1).

Step 2. Sequentially search both of the parent chromosomes and consider the first unvisited vertex of the present cluster appearing after “vertex p” in each parent. If no unvisited vertex after “vertex p” is available in one (or both) of the parents, search sequentially from the start of that parent and consider the first unvisited vertex of the cluster, and go to Step 3.

Step 3. Suppose “vertex α” and “vertex β” are found in the 1st and 2nd parents, respectively; then, for selecting the next vertex of the offspring chromosome, go to Step 4.

Step 4. If c(p, α) < c(p, β), then select “vertex α”; otherwise, select “vertex β” as the next vertex. Concatenate the selected vertex to the partially constructed offspring chromosome and go to Step 5.

Step 5. If there is no vertex left in the present cluster, then go to the next cluster, if any. If the offspring is a complete chromosome, then stop; otherwise, rename the present vertex as “vertex p” and go to Step 2.

Let a pair of parent chromosomes be P1: (1, 2, 4, 3, 6, 7, 5) and P2: (1, 3, 2, 4, 6, 5, 7) with costs 357 and 354, respectively, with respect to the original cost matrix given in Table 1. By applying the above SCX, we obtain the offspring chromosome (1, 2, 4, 3, 6, 5, 7) with cost 318, which is less than the cost of both parents. The parent and offspring chromosomes are shown in Figure 2. In general, a crossover operator inherits the parents’ characteristics, and an operator that preserves good characteristics of the parents in the offspring is said to be a good operator. The SCX is found to be excellent in this regard. Bold edges in Figure 2(c) are the edges that are available in either the first or the second parent. For this example, all edges are selected from one of the parents.
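The modified SCX steps above might be coded as in the following sketch; vertices are 0-based with the depot as vertex 0, the cluster membership array and the helper function are assumptions introduced for the illustration, and `cost` stands for the (reduced) cost matrix.

```cpp
#include <vector>

// Sketch of the sequential constructive crossover (SCX) adapted to the OCTSP.
// Vertices are 0..n-1 with 0 as the depot, cluster[v] in {1, ..., k} gives the
// cluster of vertex v (cluster[0] = 0), and cost is the (reduced) cost matrix.
// Both parents are assumed to be feasible OCTSP chromosomes.
using Chromosome = std::vector<int>;

// First unvisited vertex of `targetCluster` appearing after `fromVertex` in the
// parent; if none is found there, the search wraps to the start of the parent
// (Step 2 above).  Returns -1 when the cluster is already exhausted.
static int firstUnvisitedOfCluster(const Chromosome& parent, int fromVertex,
                                   int targetCluster,
                                   const std::vector<int>& cluster,
                                   const std::vector<char>& visited) {
    const int n = static_cast<int>(parent.size());
    int start = 0;
    for (int i = 0; i < n; ++i)
        if (parent[i] == fromVertex) { start = i + 1; break; }
    for (int pass = 0; pass < 2; ++pass) {
        for (int i = (pass == 0 ? start : 0); i < n; ++i) {
            const int v = parent[i];
            if (!visited[v] && cluster[v] == targetCluster) return v;
        }
    }
    return -1;
}

Chromosome scxForOctsp(const Chromosome& p1, const Chromosome& p2, int k,
                       const std::vector<int>& cluster,
                       const std::vector<std::vector<double>>& cost) {
    const int n = static_cast<int>(p1.size());
    std::vector<char> visited(n, 0);
    Chromosome child{0};                        // start at the depot (vertex 0)
    visited[0] = 1;
    int current = 0;                            // the "current vertex p"
    for (int cl = 1; cl <= k; ++cl) {           // clusters in the prespecified order
        for (;;) {
            const int a = firstUnvisitedOfCluster(p1, current, cl, cluster, visited);
            const int b = firstUnvisitedOfCluster(p2, current, cl, cluster, visited);
            if (a < 0 && b < 0) break;          // cluster finished; move to the next one
            int next;
            if (a < 0)      next = b;
            else if (b < 0) next = a;
            else            next = (cost[current][a] < cost[current][b]) ? a : b;  // cheaper edge (Step 4)
            child.push_back(next);
            visited[next] = 1;
            current = next;
        }
    }
    return child;
}
```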

For the crossover operation, a pair of parent chromosomes is selected sequentially from the mating pool. It is reported that the SCX gets stuck in local minima quickly for the TSP [7], which is very often due to an identical population. To overcome this situation, the selected parents are checked for duplication. If the selected parents are found to be identical, then the second parent is modified temporarily by swapping some randomly chosen pair of genes in the chromosome, and then the crossover operation is performed. To improve solution quality as well as to keep a mixture of parents and offspring in the population, the first parent is replaced by the offspring only if the offspring value is better than the average value of the present population. In this way, the mixed population also retains diversity. To further improve the quality of the solution obtained by crossover, many researchers have applied the 2-opt search operator. To improve solution quality, we use a local search method that combines three mutation operators, which is discussed in Section 3.8. However, we do not apply this local search method to every offspring; it is applied only if the offspring value is better than the average population value. Since our crossover operator produces only one offspring, to keep the population size fixed throughout the generations, the present (original) second parent chromosome is considered as the first parent when pairing with the next chromosome in order, and so on.

3.7. Mutation Operation

The mutation operator randomly selects a position in the chromosome and changes the corresponding gene, thereby modifying information. The need for mutation comes from the fact that, as the less fit chromosomes of successive generations are discarded, some aspects of genetic material could be lost forever. By performing occasional random changes in the chromosomes, GAs ensure that new parts of the search space are reached, which selection and crossover alone could not fully guarantee. In doing so, mutation ensures that no important features are prematurely lost, thus maintaining diversity in the mating pool. For this investigation, we have considered the reciprocal exchange mutation operator, which randomly selects two genes of a chromosome in every cluster and swaps them. The probability of mutation is usually chosen to be considerably smaller than the probability of crossover, so mutation plays a secondary role in the GA search. For example, let the chromosome (1, 2, 4, 3, 6, 7, 5) be selected for mutation; if vertices 2 and 4 are swapped in cluster 1 and vertices 7 and 5 are swapped in cluster 2, then the mutated chromosome becomes (1, 4, 2, 3, 6, 5, 7), which is shown in Figure 3. Bold edges in Figure 3(b) are the new edges in the mutated chromosome.
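A possible implementation of this per-cluster reciprocal exchange is sketched below; the clusterStart/clusterEnd position arrays are an indexing assumption rather than the paper's notation.

```cpp
#include <vector>
#include <random>
#include <utility>
#include <cstddef>

// Reciprocal exchange mutation for the OCTSP: with probability pm, swap two
// randomly chosen positions inside every cluster segment of the chromosome, so
// that cluster contiguity and the cluster order are preserved.
void reciprocalExchange(std::vector<int>& chromosome,
                        const std::vector<int>& clusterStart,   // first position of each cluster
                        const std::vector<int>& clusterEnd,     // last position of each cluster
                        double pm, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    if (uni(rng) >= pm) return;                                 // mutate with probability pm
    for (std::size_t r = 0; r < clusterStart.size(); ++r) {
        std::uniform_int_distribution<int> pos(clusterStart[r], clusterEnd[r]);
        const int a = pos(rng), b = pos(rng);                   // the two positions may coincide
        std::swap(chromosome[a], chromosome[b]);
    }
}
```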

3.8. A Local Search Method

We have considered a combined mutation operation as a local search method; it has been successfully applied to the bottleneck TSP [8, 22] and the maximum TSP [25]. It combines three mutation operators, insertion, inversion, and reciprocal exchange, each applied with 100% probability. The insertion operator selects a vertex (gene) in a chromosome and inserts it in a random place, and the inversion operator selects two points along the length of a chromosome and reverses the subchromosome between these points. This local search, a modification of the hybrid mutation operator [26], is applied to a chromosome. Recall that the sizes of the clusters are n_1, n_2, ..., n_k, respectively. Suppose (1, x_1, x_2, ..., x_{n-1}) is a chromosome; then the local search for the OCTSP can be developed as follows (a code sketch is given after the steps).

Step 0. Set p = 1 and q = n_1 (the positions of the first and last genes of the current cluster).

Step 1. For r = 1 to k perform Step 2.

Step 2. Set q = p + n_r − 1 and go to Step 3.

Step 3. For i = p to q − 1 perform Step 4.

Step 4. For j = i + 1 to q perform Step 5.

Step 5. If inserting vertex x_i after vertex x_j reduces the present tour cost, then insert vertex x_i after vertex x_j. In either case go to Step 6.

Step 6. If inverting the subchromosome between the vertices x_i and x_j reduces the present tour cost, then invert the subchromosome. In either case go to Step 7.

Step 7. If swapping the vertices x_i and x_j reduces the present tour cost, then swap them. In either case go to Step 8.

Step 8. Set p = p + n_r and go to Step 1.
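The steps above might be implemented as in the following sketch, which tries insertion, inversion, and reciprocal exchange for every position pair inside each cluster segment and keeps a move only when it reduces the tour cost; the tour-cost callable and the segment-boundary arrays are assumptions made for the illustration.

```cpp
#include <vector>
#include <algorithm>
#include <functional>
#include <utility>
#include <cstddef>

// Combined local search: inside each cluster segment [clusterStart[r], clusterEnd[r]]
// of the chromosome, try insertion, inversion, and reciprocal exchange for every
// position pair (i, j), keeping a move only if it lowers the tour cost returned
// by `tourCost` (assumed to evaluate a complete chromosome consistently).
void combinedLocalSearch(std::vector<int>& ch,
                         const std::vector<int>& clusterStart,
                         const std::vector<int>& clusterEnd,
                         const std::function<double(const std::vector<int>&)>& tourCost) {
    double best = tourCost(ch);
    for (std::size_t r = 0; r < clusterStart.size(); ++r) {
        for (int i = clusterStart[r]; i < clusterEnd[r]; ++i) {
            for (int j = i + 1; j <= clusterEnd[r]; ++j) {
                std::vector<int> trial = ch;                    // insertion: move gene i after gene j
                const int v = trial[i];
                trial.erase(trial.begin() + i);
                trial.insert(trial.begin() + j, v);
                if (double c = tourCost(trial); c < best) { ch = trial; best = c; }

                trial = ch;                                     // inversion of the subchromosome [i, j]
                std::reverse(trial.begin() + i, trial.begin() + j + 1);
                if (double c = tourCost(trial); c < best) { ch = trial; best = c; }

                trial = ch;                                     // reciprocal exchange of genes i and j
                std::swap(trial[i], trial[j]);
                if (double c = tourCost(trial); c < best) { ch = trial; best = c; }
            }
        }
    }
}
```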

3.9. Immigration

It is seen that GAs sometimes get stuck in local minima for combinatorial optimization problems, which is very often due to an identical population. So, to improve the capability of GAs, the population should be diversified. To diversify the population, an immigration method is adopted, in which some randomly selected chromosomes are replaced by new chromosomes after some generations [22]. We also consider such an immigration method. For our investigation, 20% of the population is replaced by chromosomes generated randomly using the sequential sampling algorithm discussed in Section 3.4 if no improvement is found within the last 20 generations. Once immigration is applied, we wait another 20 generations for any improvement. Our hybrid GA (HGA) for the OCTSP may be summarized as in Figure 4 [22].
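A minimal sketch of this immigration step, with the replacement fraction and stall limit passed in as parameters (0.20 and 20 in the setting described above), might look as follows; the sampling routine is passed in as a callable.

```cpp
#include <vector>
#include <random>
#include <functional>

// Immigration sketch: when no improvement has been seen for `stallLimit`
// generations, replace a fraction of randomly chosen chromosomes with fresh
// ones produced by the sequential sampling routine passed in as `sample`.
void immigrate(std::vector<std::vector<int>>& pop,
               const std::function<std::vector<int>()>& sample,
               int generationsWithoutImprovement,
               int stallLimit, double fraction, std::mt19937& rng) {
    if (pop.empty() || generationsWithoutImprovement < stallLimit) return;
    const int replacements = static_cast<int>(fraction * pop.size());
    std::uniform_int_distribution<int> pick(0, static_cast<int>(pop.size()) - 1);
    for (int i = 0; i < replacements; ++i)
        pop[pick(rng)] = sample();          // a slot may occasionally be replaced twice
}
```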

4. Results and Discussions

We encoded our HGA in Visual C++ and executed it on a PC with a 3.40 GHz Intel(R) Core(TM) i7-3770 CPU and 8.00 GB RAM under the MS Windows 7 operating system, and we tested it on some TSPLIB [10] instances.

4.1. Parameter Setting

GAs are well suited for combinatorial optimization problems. They find near optimal solutions in reasonable time. However, they are guided by a suitable choice of parameters, namely, the crossover probability P_c, the mutation probability P_m, the population size P_s, and the termination condition. Successful working of GAs depends on a proper selection of these parameters [23], but there is no general rule for setting them. Usually, various sets of parameters are tested, and then the best one is selected; we follow a similar method. We set the parameters as follows: a maximum of 20,000 generations as the termination condition, a population size of 20, a crossover probability of 1.00 (100%), and 20 independent runs for each setting. However, we report our parameter experiments only for the mutation probability.

To set the mutation probability, six mutation probabilities, 0.00, 0.01, 0.02, 0.03, 0.04, and 0.05, were considered and tested on five asymmetric TSPLIB instances, ftv110, ftv120, ftv130, ftv140, and ftv150, each partitioned into four clusters. For example, a 7-vertex instance with two clusters (3, 3) means n_1 = 3 and n_2 = 3, with V_1 followed by V_2.

Table 4 reports the mean and standard deviation (in parentheses) of the best solution values over 20 trials on the five instances, ftv110–ftv150, for the different mutation probabilities. Boldface denotes the best average solution value. It is seen that the solutions obtained with nonzero mutation probabilities improve significantly over those obtained with zero mutation probability, which shows that the mutation operation also plays an important role in GAs. Mutation probabilities 0.03 and 0.04 are competing: one of the two values gives the best average solution for the instances ftv110, ftv120, and ftv150, and the other gives the best average solution for the remaining two instances. The standard deviations, however, indicate at which of the two values the solutions are relatively more stable.

Figure 5 plots the average best solution values for the five instances obtained by the HGA using mutation probabilities from 0.00 to 0.05. The figure clearly shows the effectiveness of the mutation operator. It is seen that, as the mutation probability increases, solution quality also increases; beyond the best-performing probability, however, solution quality deteriorates. From the table and the figure, we conclude that this mutation probability is suitable for our algorithm, and hence we use it for our further study.

4.2. Comparative Study on Asymmetric Instances

We present a comparative study between HGA and LBDCOMP [9] on some asymmetric TSPLIB instances of sizes from 34 to 171 with various numbers of clusters and different cluster sizes. It is to be mentioned that LBDCOMP [9] was claimed to find exact optimal solutions of OCTSP instances, which has been disproved by results on some small sized instances [11]. Nevertheless, since no other literature reports exact solutions for large instances, we compare against the LBDCOMP algorithm to assess the solution quality of our HGA. Table 5 shows this comparative study between HGA and LBDCOMP. The table reports the results of LBDCOMP and, for our HGA, the best solution value (BestSol), the average solution value (AvgSol) over 20 runs, the average complete computational time (CTime), the average computational time at which the final best solution is seen for the first time (FTime) over the twenty runs, and the percentage of error (Error(%)) of the best solution. The percentage of error is calculated by the formula Error(%) = ((BestSol − OptSol)/OptSol) × 100%, where BestSol denotes the best solution value obtained by HGA and OptSol denotes the solution value obtained by LBDCOMP.

It is seen from Table 5 that our HGA finds the best/optimal solution of thirty-two instances at least once in twenty runs, whereas LBDCOMP could not find the optimal solution for at least six instances: ftv33 with clusters (16, 17) and (9, 24), ftv35 with clusters (17, 18), ftv47 with clusters (13, 34), ftv55 with clusters (27, 28), and ftv170 with clusters (44, 42, 42, 42). That is, for these six instances the solution quality of HGA is found to be better. On the other hand, for five instances, namely, ftv110, ftv130, ftv140, ftv150, and ftv160, with four clusters each, the solution quality of LBDCOMP is better than that of our HGA. For these five instances, the percentage of error of HGA is at most 0.53%. However, on average, the solution quality of HGA is 0.38% better than that of LBDCOMP.

In terms of computational time, we cannot directly compare the algorithms because they were executed on different machines, and it was not possible to access the original code of LBDCOMP. However, a large gap between the computational times of LBDCOMP and HGA is seen in the table, and HGA takes much less time. Further, if FTime is considered for HGA, then it is clearly much better than LBDCOMP. It is interesting to see that, for instances with the same number of clusters but different cluster sizes, HGA takes different computational times, and as the cluster sizes become more unbalanced, the computational time increases. In an unbalanced clustered instance, the sizes of the clusters are not equal. It is also seen that, on average, HGA hits its final best solution for the first time within 56% of the complete computational time. This shows that, for these asymmetric TSPLIB instances, HGA finds its best solution, on average, around the middle of the generations.

4.3. Comparative Study on Symmetric Instances

Now we compare our HGA with the lexisearch algorithm (LSA) [11] on some small sized symmetric TSPLIB [10] instances with various numbers of clusters and different cluster sizes. It is to be noted that our HGA does not require any modification for solving different types and cases of the instances. Table 6 shows the comparative study between LSA and HGA. The solution quality of HGA is found to be insensitive to the number of runs for most of the instances. HGA finds the best/optimal solution of twenty-three instances at least once in twenty runs, whereas LSA could not find the optimal solution for at least three instances within four hours of computational time, namely, gr48 with clusters (23, 24) and eil51 with clusters (25, 25) and (16, 17, 17). Overall, for these symmetric instances the solution quality of HGA is found to be better; on average, it is 0.24% better than that of LSA.

In terms of computational time, it can easily be concluded that HGA is much better than LSA, although LSA was executed on a slower machine (a Pentium IV PC with 3 GHz speed and 448 MB RAM). Of course, the natures of LSA and HGA are not the same: LSA gives exact optimal solutions, whereas HGA gives heuristic solutions. It is also seen from the table that, on average, HGA hits its final best solution for the first time within 13% of the complete computational time. This shows that, for these instances, HGA finds its best solution, on average, near the beginning of the generations.

4.4. Proposed Solution for Some More Symmetric Instances

Table 7 presents results for some more symmetric TSPLIB instances of sizes from 52 to 431 with various numbers of clusters and cluster sizes. Since, to the best of our knowledge, no literature presents solutions for these instances, we could not provide any comparative study on them; however, we present the results for future study of the OCTSP. For self-comparison, we provide the solution value and the percentage of error (in parentheses) obtained by our HGA for the instances with one cluster, which are, of course, usual TSP instances. Out of forty-seven usual TSP instances, HGA finds the exact optimal solution for thirty-three instances. For the remaining instances, the maximum percentage of error is 1.08%. That means our algorithm can provide near exact solutions, if not exact ones. Treating this study as a basis for the effectiveness of the algorithm, we may conclude that the reported solutions are near exact, if not exact. It is also seen from the table that, for the same instance, as the number of clusters increases the solution value also increases. On the other hand, as the number of clusters increases the computational time decreases. In general, the computational time for solving a single clustered instance (i.e., a usual TSP instance) is more than that for its corresponding multiclustered instances. It seems that the structures of these multiclustered instances are less complex and, hence, easier to solve than their corresponding single clustered instances. For these symmetric instances, on average, HGA hits its final best solution for the first time within 43% of the complete computational time. This shows that HGA finds its best solution for these instances, on average, around the middle of the generations.

5. Conclusions

We presented a hybrid genetic algorithm using sequential constructive crossover, 2-opt search, a local search, and an immigration method to obtain heuristic solutions to the OCTSP. We used a sequential sampling method for generating the initial population. The efficiency of the hybrid GA for the problem has been examined against the exact partitioning algorithm (LBDCOMP) [9] on some asymmetric TSPLIB instances and the lexisearch algorithm (LSA) [11] on some small sized symmetric TSPLIB instances. The computational experiments show that our HGA is efficient in producing high quality solutions for the benchmark instances. On the basis of solution quality, our HGA is found to be better than LBDCOMP and LSA, and in terms of computational time our algorithm is also found to be the best one. Finally, we presented solutions for some more symmetric TSPLIB instances. Since, to the best of our knowledge, no literature presents solutions for these instances, we could not confirm the quality of our solutions for them. However, for the symmetric instances of size up to 51, we found that our HGA obtains the exact optimal solutions. It is to be noted that the HGA does not require any modification for solving different types of TSPLIB instances.

For any instance, as the number of clusters increases, the solution value also increases. The computational time for solving a single clustered instance (i.e., a usual TSP instance) is more than that for solving its corresponding multiclustered instances. For any multiclustered instance, as the clusters become more unbalanced, the computational time increases.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is very much thankful to the honourable reviewers for their constructive comments and suggestions. This research was supported by the NSTIP Strategic Technologies program no. 10 in the Kingdom of Saudi Arabia via Award no. 11-INF1788-08. The author is also very much thankful to the NSTIP for its financial and technical support.