Abstract

This paper solves the dynamic traveling salesman problem (DTSP) using the dynamic Gaussian Process Regression (DGPR) method. The problem of the varying correlation between tours is alleviated by a nonstationary covariance function interleaved with DGPR to generate a predictive distribution over DTSP tours. This approach is combined with the Nearest Neighbor (NN) method and the iterated local search to track dynamic optima. Experimental results were obtained on DTSP instances, with comparisons against Genetic Algorithm and Simulated Annealing. The proposed approach demonstrates superiority in finding a good traveling salesman problem (TSP) tour with less computational time under nonstationary conditions.

1. Introduction

A bulk of research in optimization has carved a niche in solving stationary optimization problems. As a corollary, a flagrant gap has hitherto existed in finding solutions to problems whose landscape is dynamic to the core. In many real-world optimization problems a wide range of uncertainties have to be taken into account [1]. These uncertainties have engendered a recent avalanche of research in dynamic optimization. Optimization in stochastic dynamic environments continues to crave trailblazing solutions to problems whose nature is intrinsically mutable. Several concepts and techniques for addressing dynamic optimization problems have been proposed in the literature. Branke et al. [2] delineate them through different stratifications: for example, those that ensure heterogeneity, those that sustain heterogeneity in the course of iterations, techniques that store solutions for later retrieval, and those that use multiple populations. The ramp-up in significance of DTSP in stochastic dynamic landscapes has, up to the hilt, over the past two decades attracted a raft of computational methods congenial to addressing the floating optima (Figure 1). An in-depth exposition is available in [3, 4].

The traveling salesman problem (TSP) [5], one of the most thoroughly studied NP-hard problems in combinatorial optimization, arguably remains a main research experiment that has hitherto been cast as an academic guinea pig, most notably in computer science. It is also a research factotum that intersects with a wide expanse of research areas; for example, it is widely studied and applied by mathematicians and operations researchers on a grand scale. TSP's prominence is ascribed to its flexibility and amenability to a copious range of problems. Gaussian process regression is touted as a sterling model on account of its stellar capacity to interpolate observations, its probabilistic nature, its versatility, and its practical and theoretical simplicity.

This research lays bare a dynamic Gaussian process regression (DGPR) with a nonstationary covariance function to give foreknowledge of the best tour in a landscape that is subject to change. The research is in concert with the argumentation that optima are innately fluid, cognizant that their size, nature, and position are potentially volatile over the lifespan of the optima. This skittish landscape, most notably in optimization, is a cue for fine-grained research to track the moving and evolving optima and to provide a framework for solving a cartload of pent-up problems that are intrinsically dynamic. We colligate DGPR with the nearest neighbor (NN) algorithm and the iterated local search, a medley whose purpose is to refine the solution.

We have arranged the paper in four sections. Section 1 is limited to the introduction. Section 2's ambit includes a review of the methods that form the mainspring of this work, namely, the Gaussian process, TSP, and DTSP. We elucidate DGPR for solving the TSP in Section 3. Section 4 discusses the results obtained and draws conclusions.

2. The Traveling Salesman Problem (TSP)

The traveling salesman problem was first considered by Menger in 1932 [7]. Menger gives interesting ways of solving TSP and lays bare the first approaches considered during the evolution of TSP solutions. An exposition on TSP history is available in [8–10].

Basic Definitions and Notations. It is imperative to note that, in the gamut of TSP, both the symmetric and the asymmetric variants are important threads in its fabric. We factor them into this work through the following expressions.

Basically, a salesman traverses an expanse of cities, culminating in a tour. Writing the tour as an ordering $(c_1, c_2, \ldots, c_n)$ of the $n$ cities, the distance in terms of cost between cities is computed by minimizing the path length:
\[
\min \; f(c_1, \ldots, c_n) = \sum_{i=1}^{n-1} d(c_i, c_{i+1}) + d(c_n, c_1).
\]
We provide momentary storage for the cost distances: the distances between cities are stored in a distance matrix $D = (d_{ij})_{n \times n}$. For brevity, the problem can also be situated as an optimization problem over tours $T$, where we minimize the tour length (Figure 5):
\[
\min_{T} \; \sum_{(i,j) \in T} d_{ij}.
\]
The distance matrix of a TSP instance has certain features which come in handy in defining a set of classes for TSP [11]. If the city point $c_i = (x_i, y_i)$ in a tour is accentuated, then, drawing from the Euclidean distance expression [11], we present the entries of the matrix of pairwise distances as
\[
d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}.
\]
Affixed to TSP are important aspects that we bring to the fore in this paper. We give a brief overview of the symmetric traveling salesman problem (STSP) and the asymmetric traveling salesman problem (ATSP) as follows.
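To fix these quantities, the following minimal Python sketch (our illustration, not the authors' code) builds the Euclidean distance matrix $D$ and evaluates the length of a tour:

```python
import numpy as np

def distance_matrix(coords: np.ndarray) -> np.ndarray:
    """coords: (n, 2) array of city coordinates; returns the n x n matrix D."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def tour_length(tour: np.ndarray, D: np.ndarray) -> float:
    """Sum of edge costs along the tour, including the closing edge home."""
    return float(D[tour, np.roll(tour, -1)].sum())

coords = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0], [0.0, 4.0]])
D = distance_matrix(coords)
print(tour_length(np.array([0, 1, 2, 3]), D))  # 14.0 for this rectangle
```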

STSP, akin to its name, ensures symmetry in length: the distance between two points is equal in both directions, $d_{ij} = d_{ji}$, while ATSP typifies different distances between points in the two directions, $d_{ij} \neq d_{ji}$. Dissecting ATSP gives us a handle to hash out solutions.

Let ATSP be expressed subject to the distance matrix. In combinatorial optimization, an optimal value is sought, whereby in this case we minimize using the following expression:
\[
\min \; \sum_{i=1}^{n} \sum_{j \neq i} c_{ij} x_{ij}.
\]
Reference [12] formulates ATSP in integer programming with zero-one variables $x_{ij} \in \{0, 1\}$, where $x_{ij} = 1$ if the tour travels from city $i$ to city $j$ and $x_{ij} = 0$ otherwise, such that
\[
\sum_{i \neq j} x_{ij} = 1, \quad j = 1, \ldots, n, \qquad \sum_{j \neq i} x_{ij} = 1, \quad i = 1, \ldots, n.
\]
There are different rules affixed to ATSP, inter alia, to ensure a tour does not overstay its one-off visit to each vertex. The rules (subtour elimination constraints) also ensure that standards are defined for subtours.

In the symmetry paradigm, the problem is postulated analogously. For brevity, we present the subsequent work with tautness: with symmetric variables $x_{ij} = x_{ji}$, we minimize
\[
\sum_{i < j} c_{ij} x_{ij}
\]
such that
\[
\sum_{j \neq i} x_{ij} = 2, \qquad i = 1, \ldots, n.
\]
TSP is equally amenable to the Hamiltonian cycle [11], and so we use graphs to ram home a different solution approach to the problem of the traveling salesman. In this approach, we define a graph $G = (V, E)$; this is indicative of graph theory. The problem can be seen through the prism of a graph-cycle challenge, with the vertices and edges representing the cities and the routes between them, respectively.

It is also plausible to optimize TSP by adopting both integer programming and linear programming approaches, pieced together in [13]. We can also view it through linear programming, for example by relaxing the integrality constraint $x_{ij} \in \{0, 1\}$ to $0 \leq x_{ij} \leq 1$. Astounding ideas have sprouted, providing profound approaches to solving TSP; in one such case, a few parallel edges are interchanged. We use the Hamiltonian graph cycle [11] with the equality (degree) constraints above. The common denominator of these methods is to solve city instances in the shortest time possible. A slew of approaches have been cobbled together extensively in optimization and other areas of scientific study. The last approach in this paper is to transpose the asymmetric problem into a symmetric one. The early work of [14] explicates the concept: a dummy city is affixed to each city, and the distances between dummies and bona fide cities are made equal in both directions, which renders the distance matrix symmetric. The problem is then solved symmetrically, thereby assuaging the complexities of the asymmetric formulation.
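The dummy-city transformation of [14] admits a short sketch. This is a hedged illustration (the penalty constant M, the large cost standing in for forbidden edges, and the function name are our choices, not the paper's):

```python
import numpy as np

def atsp_to_stsp(C: np.ndarray, M: float = 1e6) -> np.ndarray:
    """Embed an n x n asymmetric cost matrix C into a symmetric 2n x 2n matrix.

    City i gets a dummy i'; edge (i, i') costs -M, edge (i', j) costs C[i, j],
    and real-real / dummy-dummy edges get a prohibitively large cost. The
    symmetric optimum, plus n * M, recovers the ATSP tour length."""
    n = C.shape[0]
    S = np.full((2 * n, 2 * n), 1e9)   # forbid edges between cities of one type
    B = C.astype(float).copy()         # dummy i' -> real j costs C[i, j]
    np.fill_diagonal(B, -M)            # real i <-> its own dummy costs -M
    S[n:, :n] = B
    S[:n, n:] = B.T                    # mirror the block to keep S symmetric
    return S
```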

2.1. Dynamic TSP

Different classifications of dynamic problems have been conscientiously expatiated in [15]. The wide array of dynamic stochastic optimization ontology ranges from moving morphologies to drifting landscapes. Dynamic optima exist owing to moving alleles in the natural realm, and nature remains the fount of artificial intelligence. Optimization mimics the whole enchilada, including the intrinsically floating nature of alleles, which provides fascinating insights into solving dynamic problems. Dynamic encoding problems were proposed by [16].

DTSP was initially introduced in 1988 by [17, 18]. In the DTSP, a salesman starts his trip from a city and, after a complete trip, comes back to his own city again, passing each city exactly once. The salesman is behooved to reach every city in the itinerary. In DTSP, cities can be deleted or added [19] on account of varied conditions. The main purpose of the trip is to travel the smallest distance; our goal is to find the shortest route for the round trip.

Consider a city population $C = \{c_1, \ldots, c_n\}$ as the problem at hand, where in this case we want to find the shortest path through $C$ with a single visit to each city. The problem has been modeled through a raft of prisms, for example, a graph with nodes denoting cities and edges denoting routes between them. For the purpose of elucidation, the Euclidean distance between cities $c_i$ and $c_j$ at time $t$ is calculated as follows [19]:
\[
d_{ij}(t) = \sqrt{\bigl(x_i(t) - x_j(t)\bigr)^2 + \bigl(y_i(t) - y_j(t)\bigr)^2}.
\]

2.1.1. Objective Function

The predictive function for solving the dynamic TSP is defined as follows.

Given a set of different costs $c_{ij}(t)$, the distance matrix $D(t) = (d_{ij}(t))$ is contingent upon time. Due to the changing routes in the dynamic setting, time is pivotal, so the cost is expressed as a function of distance and time. The distance matrix has also been lucidly defined in the antecedent sections. Let us use the supposition that cities $i$ and $j$ lie on the route and that the leg between them is traveled at time $t$. Our interest is bounded on finding the least distance between them. In this example, as aforementioned, time $t$ and, of course, the cost $c_{ij}(t)$ play significant roles in the quality of the solution. DTSP is therefore minimized using the following expression:
\[
\min \; f(T, t) = \sum_{i=1}^{n-1} d_{T_i, T_{i+1}}(t) + d_{T_n, T_1}(t).
\]
From Figures 2, 3, and 4, the initial DTSP route is constructed upon the visiting requests carried by the traveling salesman [20]. As the traveling salesman sets forth, different requests come about which compel the traveling salesman to change the itinerary to factor in the new trip layover demands.
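To make the time dependence concrete, the sketch below (our illustration; the helper name coords_at and the leg-by-leg timing are assumptions, not the paper's notation) evaluates a round trip whose city positions drift over time:

```python
import numpy as np

def dtsp_tour_cost(tour, coords_at):
    """Cost of a round trip when city coordinates move; coords_at(t) returns the
    (n, 2) array of positions at time step t, and leg i is traveled at time i."""
    total = 0.0
    n = len(tour)
    for t in range(n):
        c = coords_at(t)                      # snapshot of the drifting landscape
        a, b = tour[t], tour[(t + 1) % n]     # leg from city a to city b
        total += np.linalg.norm(c[a] - c[b])  # Euclidean distance d_ab(t)
    return total
```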

2.2. Gaussian Process Regression

In machine learning, the primacy of Gaussian process regression cannot be overstated. The methods of linear and locally weighted regression have been outmoded by Gaussian process regression in solving regression problems. Gold mining was the major motivation for this method: Krige, whose brainchild Kriging is [21], postulated that, using the posterior, the co-occurrence of gold can be encapsulated as a function of space. With Krige's interpolation, mineral concentrations at different points can be predicted.

In a Gaussian process, we find a set of random variables, any finite number of which have a joint Gaussian distribution. The specification includes a covariance function $k(x, x')$ and a mean function $m(x)$ that parameterize the Gaussian process. The covariance function determines the similarity of different variables. In this paper, we expand the ambit of study to nonstationary covariance:
\[
f(x) \sim \mathcal{GP}\bigl(m(x),\, k(x, x')\bigr).
\]
In the equation, $m(x) = \mathbb{E}[f(x)]$ and $k(x, x') = \mathbb{E}\bigl[(f(x) - m(x))(f(x') - m(x'))\bigr]$.

The corresponding matrices for the mean and the covariance are presented in (15).

GPR (Figure 6) has been extensively studied across the expanse of prediction. This has resulted in different expressions that corroborate the preference for the method. In this study we have a constellation of training pairs $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$. The GPR model [22] then becomes
\[
y_i = f(x_i) + \varepsilon_i,
\]
subject to noise $\varepsilon_i \sim \mathcal{N}(0, \sigma_n^2)$.

The probability density describes the likelihood of a certain value being assumed by a variable. Given a set of observations bound by a number of parameters $w$, the likelihood is
\[
p(y \mid X, w) = \mathcal{N}\bigl(X^{\top} w + b,\; \sigma_n^2 I\bigr).
\]
In this case, the bias is denoted by $b$.

Gaussian process regression is analogous to Bayesian regression, with a fractional difference [23]. In one of the computations by Bayes' rule [23], the Bayesian linear model is parameterized by a covariance matrix and a mean, denoted by $\Sigma$ and $\mu$, respectively:
\[
p(w \mid X, y) = \frac{p(y \mid X, w)\, p(w)}{p(y \mid X)},
\]
where
\[
p(y \mid X) = \int p(y \mid X, w)\, p(w)\, dw.
\]
Using the posterior probability, the Gaussian posterior is presented as
\[
p(w \mid X, y) \sim \mathcal{N}(\mu, \Sigma).
\]
Also, the predictive distribution, given the observed dataset, helps to model a probability distribution over an interval rather than estimating just a point:
\[
p(f_* \mid x_*, X, y) = \mathcal{N}\!\bigl(k_*^{\top} K_y^{-1} y,\; k_{**} - k_*^{\top} K_y^{-1} k_*\bigr),
\]
where $K_y = K + \sigma_n^2 I$, $k_* = k(X, x_*)$, and $k_{**} = k(x_*, x_*)$. Inverting $K_y$, of size $n \times n$, is needed, which is costly when $n$ is large. We rewrite the predictive mean as
\[
\bar{f}_* = k_*^{\top} \alpha, \qquad \alpha = K_y^{-1} y.
\]
The covariance matrix is $K_y = K + \sigma_n^2 I$.
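These predictive equations translate directly into a few lines of linear algebra. The sketch below is our minimal Python illustration with a stationary squared-exponential kernel for readability; the paper's point is precisely to replace that kernel with a nonstationary one:

```python
import numpy as np

def sq_exp(X1, X2, ell=1.0, sf2=1.0):
    """Squared-exponential kernel k(x, x') = sf2 * exp(-|x - x'|^2 / (2 ell^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(X, y, Xs, sn2=1e-2):
    """Posterior mean and pointwise variance of f at test inputs Xs."""
    Ky = sq_exp(X, X) + sn2 * np.eye(len(X))     # K + sn2*I from the text
    Ks, Kss = sq_exp(X, Xs), sq_exp(Xs, Xs)
    L = np.linalg.cholesky(Ky)                   # avoids forming Ky^-1 explicitly
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, np.diag(Kss - v.T @ v)  # mean k_*^T alpha and variance
```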

2.2.1. Covariance Function

In simple terms, the covariance defines the correlation of function values at given inputs. A host of covariance functions for GPR have been studied [24]. In this example, the parameters are the signal variance $\sigma_f^2$, the variance of the bias $\sigma_b^2$, the noise variance $\sigma_n^2$, the length scale $\ell$, and the roughness $\gamma$. However, in finding solutions to dynamic problems, there is a mounting need for nonstationary covariance functions, for the problem landscapes have increasingly become protean. The lodestar of this research is to use a nonstationary covariance to provide an approach to dynamic problems.

A raft of functions have been studied. A simple form is described in [25]:
\[
k(x, x') = \exp\!\Bigl(-\tfrac{1}{2}(x - x')^{\top} \Sigma^{-1} (x - x')\Bigr),
\]
where, in the quadratic form, $\Sigma$ denotes the matrix of the covariance function.
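One concrete nonstationary construction is Gibbs' kernel, in which the length scale itself varies with the input; we offer it here as an illustrative stand-in (our assumption, not the paper's function, which appears in Section 3.2):

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale):
    """1-D nonstationary covariance; lengthscale(x) returns l(x) > 0 per input."""
    l1 = lengthscale(x1)[:, None]
    l2 = lengthscale(x2)[None, :]
    pre = np.sqrt(2.0 * l1 * l2 / (l1 ** 2 + l2 ** 2))  # normalizing prefactor
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return pre * np.exp(-d2 / (l1 ** 2 + l2 ** 2))

# Example: correlations tighten near the origin and relax farther away.
# K = gibbs_kernel(np.linspace(0, 5, 50), np.linspace(0, 5, 50),
#                  lambda x: 0.1 + 0.3 * np.abs(x))
```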

3. Materials and Methods

The Gaussian process regression method was chosen in this work owing to its capacity to interpolate observations, its probabilistic nature, and its versatility [26]. Gaussian process regression has been applied considerably in machine learning and other fields [27–29]. It has pushed back the frontiers of prediction and provided solutions to a mound of problems, for instance, making it possible to forecast along arbitrary paths and providing astounding results in a wide range of prediction problems. GPR has also provided a foundation for the state of the art in advancing research on multivariate Gaussian distributions.

A host of different notations for different concepts are used throughout this paper:
(i) $^{\top}$ typically denotes the vector transpose;
(ii) $\hat{\cdot}$ denotes an estimate;
(iii) roman letters typically denote matrices.

Our extrapolation is dependent on the training and testing datasets from the TSPLIB [30]. We adumbrate our approach as follows (a runnable sketch of step (b) appears after this list; step (l) is illustrated in Section 4):
(a) input the distance matrix between cities;
(b) invoke the Nearest Neighbor method for tour construction;
(c) encode the tour as binary for program interpretation;
(d) as the landscape drifts, set a threshold value for the tour and an error rate for predictability;
(e) get the cost sum;
(f) determine the cost minimum and change it to binary form;
(g) present the calculated total cost;
(h) initialize the hyperparameters;
(i) use the nonstationary covariance function, with constraints realized in the TSP dataset and distances for the different cities;
(j) calculate the integrated likelihood in a dynamic regression;
(k) output the predicted optimal path and its length;
(l) implement the local search method;
(m) estimate the optimal tour;
(n) let the calculated route set the stage for iterations until there is no further need for refinement;
(o) let the optimal value be stored and define the start for subsequent computations;
(p) output the optimal tour and its cost.
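Step (b), the Nearest Neighbor construction, admits a compact runnable sketch (our Python illustration, not the authors' Matlab code):

```python
import numpy as np

def nearest_neighbor_tour(D: np.ndarray, start: int = 0) -> list:
    """Step (b): greedily visit the closest unvisited city until all are seen."""
    n = D.shape[0]
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda j: D[last, j])  # closest remaining city
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour
```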

3.1. DTSP as a Nonlinear Regression Problem

DTSP is formulated as a nonlinear regression problem. The nonlinear regression is part of the nonstationary covariance framework for floating landscapes [18]:
\[
y = f(x) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma_n^2),
\]
where $x$ encodes a tour and $y$ its cost. Our purpose is to define $f$.

3.1.1. Gaussian Approximation

The Gaussian approximation is premised on the kernel, an important element of GPR.

The supposition for this research is that once the function is known at observed points, its values at unobserved points can be determined. By rule of thumb, the aspects of the a priori (when the truth is patent, without need for ascertainment) and the a posteriori (when there is empirical justification for the truth, or the fact is buttressed by certain experiences) play a critical role in shaping an accurate estimation. The kernel determines the proximity between the estimated and the nonestimated.

Nonstationarity, on the other hand, means that the mean value of a dataset is not necessarily constant and/or that the covariance is anisotropic (varies with direction) and spatially variant, as seen in [31]. We have seen a host of nonstationary kernels in the literature, as discussed in the previous sections, for example in [32].

For dynamic landscapes, we ensure a positive definite covariance function between cities. In mathematics, convolution knits two functions to form another one. This cross-correlation approach has been successfully applied myriadly in probability, differential equations, and statistics. In floating landscapes, we see convolution at play, which produces [31]
\[
k(x_i, x_j) = \int_{\mathbb{R}^d} k_{x_i}(u)\, k_{x_j}(u)\, du.
\]
In mathematics, a quadratic form reflects a homogeneous polynomial of degree two, expressed here as
\[
Q_{ij} = (x_i - x_j)^{\top} \left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1} (x_i - x_j).
\]
A predictive distribution is then defined as in Section 2.2, and from the dataset the most probable (maximum a posteriori) estimates are used.

3.2. Hyperparameters in DGPR

Hyperparameters define the parameters of the prior probability distribution [6]. We use $\theta$ to denote the hyperparameters. From the data, we seek the $\theta$ that optimizes the probability to the highest point:
\[
\hat{\theta} = \arg\max_{\theta} \; \log p(y \mid X, \theta).
\]
From the hyperparameters $\theta$, we optimally define the marginal likelihood and introduce an objective function for the floating matrix:
\[
\log p(y \mid X, \theta) = -\frac{1}{2} y^{\top} K_y^{-1} y - \frac{1}{2} \log |K_y| - \frac{n}{2} \log 2\pi,
\]
where $K_y = K + \sigma_n^2 I$ is the covariance matrix of the noisy targets.

In this equation, the objective function is the negative of the log marginal likelihood; $K_y$ is the covariance matrix of the targets and $n$ is the number of observations.

The nonstationary covariance is defined as follows, where $\Sigma_i$ represents the covariance at the cost point $x_i$:
\[
k^{NS}(x_i, x_j) = |\Sigma_i|^{1/4}\, |\Sigma_j|^{1/4} \left|\frac{\Sigma_i + \Sigma_j}{2}\right|^{-1/2} \exp(-Q_{ij}),
\]
with
\[
Q_{ij} = (x_i - x_j)^{\top} \left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1} (x_i - x_j).
\]
After calculating the nonstationary covariance, we then make predictions [33].
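The hyperparameter step above amounts to minimizing the negative log marginal likelihood. The following Python sketch is our illustration under simplifying assumptions: it reuses the stationary sq_exp kernel from the sketch in Section 2.2 for brevity (the paper's kernel is nonstationary) and lets scipy's general-purpose optimizer stand in for whatever routine the authors used:

```python
import numpy as np
from scipy.optimize import minimize

def sq_exp(X1, X2, ell, sf2):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def neg_log_marginal_likelihood(log_theta, X, y):
    """-log p(y | X, theta) for theta = (ell, sf2, sn2), optimized in log space."""
    ell, sf2, sn2 = np.exp(log_theta)                  # enforce positivity
    Ky = sq_exp(X, X, ell, sf2) + sn2 * np.eye(len(X)) # noisy covariance K_y
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha                            # data-fit term
            + np.log(np.diag(L)).sum()                 # 0.5 * log|K_y|
            + 0.5 * len(X) * np.log(2 * np.pi))        # normalizing constant

# theta_hat = np.exp(minimize(neg_log_marginal_likelihood,
#                             np.zeros(3), args=(X, y)).x)
```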

4. Experimental Results

We use the Gaussian Processes for Machine Learning Matlab Toolbox. Its copious applicability dovetails with the purpose of this experiment, and it was titivated to encompass all the functionalities associated with our study. We used Matlab due to its robust platform for scientific experiments and sterling environment for prediction [26]. A 22-city data instance was gleaned from the TSP library [34].

On the Dell computer used for the experiments, we set the initial parameters. The dynamic regression is lumped together with the local search method to banish early global and local convergence issues.

For the global method (GA), the following parameters are defined: a sample of 22 cities, two further parameter settings of 1 and 1.2, and 100 computations, while the SA parameters include settings of 100 and 0.025 and 200 computations.

The efficacy level is always observed by collating the estimated tour with the nonestimated one [35–37]. The percentage difference between the estimated solution and the optimal solution,
\[
\frac{|f_{\text{estimated}} - f_{\text{optimal}}|}{f_{\text{optimal}}} \times 100\%,
\]
comes to 16.64%, which is indicative of a comparable reduction with respect to the existing methods (Table 1). The computational time for GPR is 4.6402, with a distance summation of 253.000. The varied landscape dramatically changes the length of travel for the traveling salesman. The length drops a notch, suggestive of a better method and an open sesame for the traveling salesman to perform his duties.

The proposed DGPR (Figure 8) was fed with the sample TSP tours. The local search method constructs the initial route, and the 2-opt method is used for interchanging edges. The local method defines the starting point and all ports of call so as to painstakingly ensure that the loop visits every vertex once and returns to the starting point. The 2-opt interchange creates a new path through the exchange of different edges [38]. Our study is corroborated by less computation time and a slumped distance when we subject TSP to predicting the optimal path. The Gaussian process runs on the shifting sands of the landscape through dynamic instances. The nonstationary functions described before bring to bear the residual, the similitude between actual and estimate. In the computations, a path is interpreted as a binary-encoded tour and an ultimate route as the minimum-cost tour.
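The 2-opt interchange just described admits a compact implementation. The sketch below is our illustration (not the authors' code): it reverses a tour segment whenever exchanging two edges shortens the tour, and repeats until no improving move remains:

```python
import numpy as np

def two_opt(tour: list, D: np.ndarray) -> list:
    """Repeatedly apply improving 2-opt moves until the tour is locally optimal."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):   # skip edge pairs sharing a city
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if D[a, c] + D[b, d] < D[a, b] + D[c, d] - 1e-12:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]  # reverse segment
                    improved = True
    return tour
```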

There are myriad methods, over and above Simulated Annealing (Figure 10) and tabu search, set forth by the fecundity of researchers in optimization. The cost information determines the replacement of a path on a floating turf: the lowest cost finds primacy over the highest cost. This process continues in pursuit of the best route (Figure 9) that reflects the lowest cost. In the dynamic setting, as the ports of call change, there is a new update on the cost of the path; the cost is always subject to change. The traveling salesman is desirous to travel the shortest distance, which is the crux of this study (Figure 11). In the weave of this work, the dynamic facet of regression remains at the heartbeat of our contribution. The local methods are meshed together to ensure the quality of the outcome. As a corollary, our study has been improved with the integration of the Nearest Neighbor algorithm and the iterated 2-opt search method. We use the same number of cities; each tour is improved by the 2-opt heuristic and the best result is selected.

In dynamic optimization, a complete re-solution of the problem at each time step is usually infeasible due to the floating optima. As a consequence, the search for exact global optima must be replaced by the search for acceptable approximations. We generate a tour for the nonstationary fitness landscape in Figure 7.

5. Conclusion

In this study, we use a nonstationary covariance function in GPR for the dynamic traveling salesman problem and predict the optimal tour of a 22-city dataset. In the dynamic traveling salesman problem, where the optima shift due to environmental changes, a dynamic approach is implemented to alleviate the intrinsic maladies of perturbation. The dynamic traveling salesman problem (DTSP), as a case of dynamic combinatorial optimization, extends the classical traveling salesman problem and finds many practical applications in real-world settings, inter alia, traffic jams, network load-balance routing, transportation, telecommunications, and network design. Our study produces a good optimal solution with less computational time in a dynamic environment. A slump in distance corroborates the argumentation that prediction brings forth a leap in efficacy in terms of overhead reduction, a robust solution borne out of comparisons that strengthen the quality of the outcome. This research foreshadows and gives an interesting direction to solving problems whose optima are mutable. DTSP is calculated by the dynamic Gaussian process regression, the cost predicted, local methods invoked, and comparisons made to refine and consolidate the optimal solution. MATLAB was chosen as the platform for the implementation because development is straightforward in this language and MATLAB has many comfortable tools for data analysis; it also has an extensive cross-linking architecture and can interface directly with Java classes. Future work should be directed toward designing new nonstationary covariance functions to increase the ability to track dynamic optima. Changes in the size and evolution of optima should also be factored in, over and above changes in location.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge W. Kongkaew and J. Pichitlamken for making their code accessible, which became a springboard for this work. Special thanks also go to the Government of Uganda for bankrolling this research through a PhD grant from the State House of the Republic of Uganda. The authors also express their appreciation to the anonymous reviewers whose efforts were telling in reinforcing the quality of this work.