Abstract

In virtual private network (VPN) design, the goal is to implement a logical overlay network on top of a given physical network. We model the traffic loss caused by blocking not only on isolated links, but also at the network level. A successful model that captures the considered network level phenomenon is the well-known reduced load approximation. We consider here the optimization problem of maximizing the carried traffic in the VPN, which is a hard optimization problem. To deal with it, we introduce a heuristic local search technique called landscape smoothing search (LSS). This study first describes the LSS heuristic. Then we introduce an improved version, called fast landscape smoothing search (FLSS), to overcome the slow search speed that arises when the objective function is very time consuming to evaluate. We apply FLSS to VPN design optimization and compare it with well-known optimization methods such as simulated annealing (SA) and the genetic algorithm (GA). FLSS achieves better results for this VPN design optimization problem than simulated annealing and the genetic algorithm.

1. Introduction

In the VPN setting the goal is to implement a logical overlay network on top of a given physical network. We consider here the optimization problem of maximizing the carried traffic in the VPN. In other words, we want to minimize the loss caused by blocking some of the offered traffic, due to insufficient capacity in the logical links.

A key feature in the VPN setting is that the underlying physical network is already given. Thus, our degree of freedom lies only in dimensioning the logical (virtual) links. However, since the given physical link capacities must be obeyed and a physical link may be shared by several logical links, we can reduce the blocking on a logical link possibly only by taking away capacity from other logical links. Therefore, we may be able to improve a logical link only by degrading others. The above described situation leads to a hard optimization problem.

Mitra et al. [1] analyzed the network loss probability caused by blocking with fixed point equations (FPEs); they derived the loss probability based only on the assumption of link independence. The actual difficulty is posed by the fact that we need to model the traffic loss caused by blocking not only on isolated links, but also at the network level. This means we also need to take into account that the loss suffered on a link reduces the offered traffic of other links and vice versa, so a complex system of mutual influences arises. This situation calls for more sophisticated machinery than blocking formulas (such as Erlang's formula) that compute the blocking probability only for a single link viewed in isolation. A successful model that captures the considered network level phenomenon is the reduced load approximation. We review it in the next section so that we can then use it in our VPN design model.

In this paper we investigate a virtual private network (VPN) design problem. We adopt a complex model to describe the carried traffic [2–4]. To deal with the arising hard optimization problem, we use a new heuristic local search technique called landscape smoothing search (LSS), proposed by Lian and Faragó, the authors of this paper, in [5]. This study first describes the LSS heuristic method; then we modify the original LSS into a fast landscape smoothing search (FLSS) method to overcome the slow search speed in cases where the objective function calculation is very time consuming. We apply FLSS to VPN design optimization and compare it with existing methods such as simulated annealing (SA) [6, 7] and the genetic algorithm (GA) [8, 9].

Basically this study consists of two parts. The first part is the proposal and analysis of the carried traffic model for virtual private networks (VPNs). In the second part we propose the landscape smoothing search (LSS) [10] method and the fast LSS (FLSS) heuristic method and apply them to VPN optimization.

The remainder of the paper is organized as follows: Section 2 presents the reduced load approximation used to model the VPN carried traffic. Section 3 analyzes a nonlinear network level optimization model; the last part of Section 3 also provides the carried traffic objective function for VPN design optimization. Section 4 presents heuristic methods for network design problems, including the original landscape smoothing search (LSS). Section 5 presents initial results with LSS. Section 6 presents the fast landscape smoothing search (FLSS). Section 7 presents numerical optimization results for VPN optimization and discusses the features of the three heuristics FLSS, SA, and GA. Finally, Section 8 concludes the paper.

2. Reduced Load Approximation

The principle of this approach is "folklore" in traffic engineering and was presented as early as the 1960s by Cooper and Katz [11]. Nevertheless, an in-depth exact investigation was done only much later, in the papers by Kelly [2] and Whitt [4]. For a comprehensive exposition of related results, see the book of Ross [3].

To present the most fundamental case, let us consider a network of $J$ links. A general link will be denoted by $j$; that is, we index the edges of the network graph here, rather than the nodes. Link $j$ has capacity $C_j$. Let us assume that a set $R$ of fixed routes is given in the network. A route $r \in R$ can, in general, be an arbitrary subset of the link set. Here we do not need the assumption that it is a path in the graph-theoretic sense; of course, the practically most important case is when it is actually a path. There may be several routes between the same pair of nodes, even on the same sequence of links. The offered traffic $V_r$ (the demand) to a given route $r \in R$ arrives as a Poisson stream, and the streams belonging to different routes are assumed to be independent.

The incidence of links and routes is given by a matrix $A = [A_{jr}]$, $j = 1,\dots,J$, $r \in R$. If link $j$ is on route $r$, then $A_{jr} = 1$; otherwise $A_{jr} = 0$. The call holding times are independent random variables, and the holding periods of calls on the same route are identically distributed with finite mean; otherwise, this distribution can be arbitrary. The central approximation assumption of the model is that the blocking events on different links are probabilistically independent. Let us denote the blocking probability of link $j$ by $B_j$. The reduced load approximation says that the Poisson stream is thinned by a factor of $(1 - B_j)$ on each traversed link independently. Hence the carried traffic on route $r$ can be expressed as
$$V_r \prod_{j=1}^{J} \left(1 - B_j\right)^{A_{jr}}. \tag{1}$$

Note that the factor $(1 - B_j)^{A_{jr}}$ is 1 if the link is not traversed by the route (because then $A_{jr} = 0$); this is why the product can be taken over all links without keeping track of which links are traversed by the route.

The carried traffic on link $j$ is obtained if we sum up the carried traffic of all routes that traverse the link:
$$\sum_{r \in R} A_{jr} V_r \prod_{i} \left(1 - B_i\right)^{A_{ir}}. \tag{2}$$

Again, the summation is simply extended for all routes since 𝐴𝑗𝑟=0 holds for those that do not contain link 𝑗, making their contribution disappear from the sum.

If the total offered load to link $j$ is denoted by $\rho_j$, then (2) should be equal to $\rho_j(1 - B_j)$, since the latter is the carried traffic on link $j$, obtained by thinning the offered load by the factor $1 - B_j$. Thus, we can write the equation
$$\rho_j \left(1 - B_j\right) = \sum_{r} A_{jr} V_r \prod_{i} \left(1 - B_i\right)^{A_{ir}}, \tag{3}$$
or, after canceling the factor $(1 - B_j)$,
$$\rho_j = \sum_{r} A_{jr} V_r \prod_{i \ne j} \left(1 - B_i\right)^{A_{ir}}. \tag{4}$$
Further equations can be obtained by using that $B_j$ depends on $\rho_j$ and $C_j$ in this model via Erlang's formula:
$$B_j = E\left(\rho_j, C_j\right) = \frac{\rho_j^{C_j} / C_j!}{\sum_{i=0}^{C_j} \rho_j^{i} / i!}. \tag{5}$$
(Note: in case $C_j$ is not an integer, we can use an analytic continuation of Erlang's formula; see [12].)
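As an illustration, Erlang's formula (5) can be evaluated with the standard numerically stable recursion $E(\rho, 0) = 1$, $E(\rho, c) = \rho E(\rho, c-1)/\bigl(c + \rho E(\rho, c-1)\bigr)$, which avoids the large factorials of the direct form. A minimal sketch (the function name is ours):

```python
def erlang_b(rho, C):
    """Erlang B blocking probability E(rho, C) for offered load rho (in erlangs)
    and integer capacity C, computed via the stable recursion on the capacity."""
    B = 1.0  # E(rho, 0) = 1: with zero circuits every call is blocked
    for c in range(1, C + 1):
        B = rho * B / (c + rho * B)
    return B
```

For example, $E(2, 2) = (2^2/2!)/(1 + 2 + 2^2/2!) = 2/5 = 0.4$, which the recursion reproduces.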

Writing out (4) and (5) for all $j = 1,\dots,J$, we obtain a system of $2J$ equations for the $2J$ unknown quantities $\rho_j, B_j$, $j = 1,\dots,J$. We can observe that $B_j$ can be computed from (5) directly, once the values of the $\rho_j$ variables are known (the link capacities are given). Therefore, the core of the problem is to compute $\rho_j$, $j = 1,\dots,J$. Eliminating $B_j$ from (4) by (5), we obtain a system of equations directly for the $\rho_j$ variables:
$$\rho_j = \sum_{r} A_{jr} V_r \prod_{i \ne j} \left(1 - E\left(\rho_i, C_i\right)\right)^{A_{ir}}. \tag{6}$$

This system of equations (or, equivalently, the systems (4) and (5) together) is called reduced load approximation.

Alternatively, the equations are also called the Erlang fixed point equations.

The concept of a fixed point comes into the picture in the following way. Let us use the vector notation $\rho = [\rho_1,\dots,\rho_J]$ and define a function $f\colon \mathbb{R}_+^J \to \mathbb{R}_+^J$ by
$$f(\rho) = \left(f_1(\rho), \dots, f_J(\rho)\right), \tag{7}$$
where $f_j(\rho)$, $j = 1,\dots,J$, is given as
$$f_j(\rho) = \sum_{r} A_{jr} V_r \prod_{i \ne j} \left(1 - E\left(\rho_i, C_i\right)\right)^{A_{ir}}. \tag{8}$$
Now the system (6) can be compactly formulated as
$$\rho = f(\rho). \tag{9}$$
In other words, we have to find a fixed point of the mapping $f\colon \mathbb{R}_+^J \to \mathbb{R}_+^J$.

There are some natural questions that arise here immediately. Does a solution (a fixed point) always exist? If one exists, is it unique? How can we find it algorithmically in an efficient way? The fundamental theorem characterizing this model was proven by Kelly [2] (see also [13]). We also outline its proof, since the proof contains some concepts that we are going to use later.

Theorem 1. The Erlang fixed-point equations always have a unique solution.

Proof. The existence of the solution follows from the fact that the function $f$, defined above, is a continuous mapping of the closed $J$-dimensional unit cube $[0,1]^J$ into itself; therefore, by the well-known Brouwer fixed-point theorem, it has a fixed point. (Brouwer's fixed-point theorem says that any continuous function that maps a compact convex set into itself has a fixed point.)
To show the uniqueness of the fixed point, we define an auxiliary function $U(y, C)$ in a tricky way, by the implicit relation
$$U\left(-\log\left(1 - E(v, C)\right),\, C\right) = v\left(1 - E(v, C)\right), \tag{10}$$
where $E(v, C)$ is Erlang's formula. The interpretation of $U(y, C)$ is that it is the average number of circuits in use (the utilization) on a link of capacity $C$ when the blocking probability is $1 - e^{-y}$. In other words, $U(y, C)$ measures the link utilization as a function of a logarithmically scaled blocking probability $y = -\log(1 - B)$.
Define now an optimization problem, as follows:
$$\text{minimize} \quad \sum_{r} V_r \exp\left(-\sum_{j} y_j A_{jr}\right) + \sum_{j} \int_0^{y_j} U\left(z, C_j\right)\, dz, \qquad \text{subject to} \quad y_j \ge 0, \; j = 1,\dots,J. \tag{11}$$
The first sum in the objective function is a strictly convex function. Since $U(y, C)$ is strictly increasing, the integrals in the second sum are also strictly convex. Hence the above optimization problem, being the minimization of a strictly convex function over a convex domain, has a unique minimum. Consider now the stationary equations obtained by equating the derivative of the objective function with zero:
$$\sum_{r} A_{jr} V_r \exp\left(-\sum_{i} y_i A_{ir}\right) = U\left(y_j, C_j\right), \quad j = 1,\dots,J. \tag{12}$$
Using the definition of $U(y, C)$ and applying the transformation $B_j = 1 - e^{-y_j}$, we get back precisely the Erlang fixed-point equations from (12). Since we already know that there is a nonnegative solution to the fixed-point equations, this implies that the stationary equations (12) also have a nonnegative solution, which is thus the minimum of the optimization problem (11). Conversely, each solution of (11) corresponds to a fixed point through the transformation $B_j = 1 - e^{-y_j}$. Since by strict convexity there is a unique minimum, (12) cannot have another solution, which implies the uniqueness of the fixed point, completing the proof.

Having proved the existence and uniqueness of the fixed point, a natural question is how to find it algorithmically. The simplest algorithm is iterated substitution using the function defined in (7), (8). We can start with any value, say $\rho^{(0)} = [1,\dots,1]$, and then iterate as
$$\rho^{(i+1)} = f\left(\rho^{(i)}\right), \tag{13}$$
until $\rho^{(i+1)}$ and $\rho^{(i)}$ are sufficiently close to each other. This method works very well in practice, although convergence is not guaranteed theoretically, since $f(\cdot)$ is not a contraction mapping; that is, $\|f(x) - f(y)\| \le \alpha \|x - y\|$ does not necessarily hold for some constant $\alpha < 1$. In fact, there exist examples of nonconvergence; see Whitt [4].
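The iterated substitution (13) is straightforward to implement. Below is a minimal sketch (function names are ours) that combines (6) with the stable Erlang B recursion. On a symmetric example of two unit-capacity links carrying one common route with offered load 1, the fixed point satisfies $\rho = 1 - E(\rho, 1) = 1/(1+\rho)$, that is, $\rho = (\sqrt{5}-1)/2 \approx 0.618$:

```python
import math

def erlang_b(rho, C):
    """Erlang B formula E(rho, C) via the stable capacity recursion."""
    B = 1.0
    for c in range(1, C + 1):
        B = rho * B / (c + rho * B)
    return B

def reduced_load_fixed_point(A, V, Cap, tol=1e-9, max_iter=1000):
    """Iterated substitution (13) for the Erlang fixed-point equations (6).
    A[j][r]: link-route incidence, V[r]: offered loads, Cap[j]: link capacities."""
    J, R = len(Cap), len(V)
    rho = [1.0] * J                            # arbitrary starting point rho^(0)
    for _ in range(max_iter):
        B = [erlang_b(rho[j], Cap[j]) for j in range(J)]
        new_rho = [sum(A[j][r] * V[r] *
                       math.prod((1 - B[i]) ** A[i][r]
                                 for i in range(J) if i != j)
                       for r in range(R))
                   for j in range(J)]
        if max(abs(a - b) for a, b in zip(new_rho, rho)) < tol:
            return new_rho, B                  # converged
        rho = new_rho
    return rho, [erlang_b(rho[j], Cap[j]) for j in range(J)]
```

As noted above, convergence is not guaranteed in general, but on well-behaved instances the iteration settles quickly.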

Another algorithmic possibility is solving the convex programming problem (11). Although this is guaranteed to work, it yields a much more complicated algorithm, made even worse by the implicit definition of the function $U(y, C)$. Therefore the practical algorithm is iterated substitution, even though it is not guaranteed to converge in pathological cases.

The presented model is the base case, when routing is fixed and traffic is homogeneous. Various extensions exist for more complicated cases, see for example, [3, 14]. Unfortunately, they lack the nice feature of the unique fixed point. In the next section, extensions to heterogeneous traffic will be used for cases when the reduced load approximation is embedded into optimization models.

3. A Nonlinear Network Level Optimization Model

In this section we build a nonlinear network level optimization model based on the reduced load approximation. Recall that we consider the situation when logical (virtual) subnetworks exist on top of the given physical network. They are realized by logical links. A logical link is, in general, a subset of the physical links; it can be, for example, a route in the physical network. Our objective is to allocate capacity to the logical links such that the physical capacity constraints are obeyed on every physical link and the total carried network traffic is maximized. Note that the logical links share physical capacities; therefore, if we want to decrease blocking on a logical link by giving it more capacity, we can only do so by taking capacity away from other logical links, thus degrading them. It is intuitively clear that a hard optimization problem arises from this VPN design.

Since the model is built on the reduced load approximation (Section 2), we use the same notation. The network contains $J$ logical links, labeled $1, 2, \dots, J$. The capacity of logical link $j$ is $C_j$. Since logical link capacities are not fixed in advance (we want to optimize with respect to them!), the $C_j$ are variables. Let $C = (C_1, C_2, \dots, C_J)$ be the vector of logical link capacities.

The condition that the sum of logical link capacities on the same physical link cannot exceed the physical capacity can be expressed by a linear system of inequalities. Let $C_{\mathrm{phys}}$ be the vector of given physical link capacities. Furthermore, let $S$ be a matrix in which the $j$th entry of the $i$th row is 1 if logical link $j$ needs capacity on the $i$th physical link, and 0 otherwise. Then the physical constraints can be expressed compactly as $SC \le C_{\mathrm{phys}}$.
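As a small illustration, the constraint $SC \le C_{\mathrm{phys}}$ is just a componentwise comparison of a matrix-vector product. A sketch (the function name is ours):

```python
def feasible(S, C, C_phys):
    """Check the physical capacity constraints S*C <= C_phys componentwise.
    S[i][j] = 1 if logical link j needs capacity on physical link i, else 0."""
    return all(
        sum(S[i][j] * C[j] for j in range(len(C))) <= C_phys[i]
        for i in range(len(C_phys))
    )
```

For instance, with two physical links shared by two logical links, $S = [[1, 1], [0, 1]]$, the allocation $C = [2, 3]$ is feasible against $C_{\mathrm{phys}} = [5, 3]$, while $C = [3, 3]$ is not.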

A set 𝑅 of fixed routes is given in the network. A route is a sequence of logical links. There may be several routes between the same pair of nodes, even on the same sequence of logical links. The offered traffic (the demand) to a given route 𝑟𝑅 is 𝑉𝑟 and is assumed to arrive as a Poisson stream. The streams belonging to different routes are assumed independent. Holding times are independent of each other and holding periods of sessions on the same route are identically distributed.

We consider heterogeneous (multirate) traffic. To preserve the nice properties of the Erlang fixed-point equations, we adopt the following homogenization approach: a session (call) that requires 𝑏 units of bandwidth is approximated by 𝑏 independent unit bandwidth calls.

The capacity (bandwidth) that a session on route 𝑟 requires is denoted by 𝑏𝑗𝑟  on link 𝑗 (for the sake of generality, we allow that it may be different on different links). If the route does not traverse link 𝑗, then 𝑏𝑗𝑟=0. Note that 𝑏𝑗𝑟 plays the same role here as 𝐴𝑗𝑟  in the description of the reduced load approximation, but 𝑏𝑗𝑟 can now also take values other than 0 and 1.

According to the applied approximations, the total carried traffic in the network is expressed as
$$\sum_{r} V_r \prod_{j} \left(1 - B_j\right)^{b_{jr}}, \tag{14}$$
where $B_j$ is the (yet unknown) blocking probability of logical link $j$ ($j = 1, 2, \dots, J$).

Our objective is to find the vector $C$ of logical link capacities, subject to the physical constraints $SC \le C_{\mathrm{phys}}$ and $C \ge 0$, such that the total carried traffic is maximized:
$$\text{maximize} \quad \sum_{r} V_r \prod_{j} \left(1 - B_j\right)^{b_{jr}}, \qquad \text{subject to} \quad SC \le C_{\mathrm{phys}}, \; C \ge 0, \tag{15}$$
where the $C_j$ are variables and the dependence of $B_j$ on $C_j$ is defined by the Erlang fixed-point equations:
$$B_j = E\left(\frac{1}{1 - B_j} \sum_{r \in R} b_{jr} V_r \prod_{i} \left(1 - B_i\right)^{b_{ir}}, \; C_j\right), \quad j = 1,\dots,J. \tag{16}$$

4. Heuristic Methods to Solve Network Design Problems

This section presents heuristic methods to solve optimization problems, including network design tasks. Here we focus only on combinatorial optimization algorithms for problems of the following form:
$$\text{minimize} \quad F(x) \quad \text{for} \; x \in S, \tag{17}$$
where $S$ is a very large finite set of feasible points (the solution space, or optimization space). The variable $x$ can take different forms: it can be a binary string, an integer vector, or a mixed integer and label combination.

4.1. Well-Known Heuristic Methods

One of the well-known general stochastic methods is simulated annealing (SA) [7, 9, 15]. As a local search method, SA also uses the notion of neighborhood, but it applies randomness in choosing the next step to avoid getting trapped in local optima. We refer to [5, 7, 9] for SA pseudocode.
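For concreteness, here is a minimal SA sketch in the spirit of the cited pseudocode (names and parameter values are ours, not taken from [5, 7, 9]): a worsening move of size $\Delta F$ is accepted with probability $e^{-\Delta F/T}$, and the temperature $T$ is reduced geometrically by the cooling factor.

```python
import math, random

def simulated_annealing(F, neighbor, x0, T0=1.0, t_reduce=0.9, steps=1000):
    """Minimal SA sketch for minimizing F. neighbor(x) returns a random
    neighbor; T is cooled geometrically by the t_reduce factor."""
    x, fx = x0, F(x0)
    x_best, f_best = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x)                      # pick a random neighbor
        fy = F(y)
        # accept improving moves always, worsening moves with prob exp(-dF/T)
        if fy < fx or random.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
            if fx < f_best:
                x_best, f_best = x, fx
        T *= t_reduce                        # cooling schedule ("T-reduce" factor)
    return x_best, f_best
```

With a very low starting temperature the procedure degenerates to a greedy random-neighbor search.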

Another well-known heuristic method is the genetic algorithm (GA), which is also often applied to combinatorial optimization problems [5, 6, 8, 9]. The procedure first selects parent solutions for generating offspring. Then it performs two basic operations: crossover with probability $P_c$ and mutation with probability $P_m$. We refer to [5, 6, 8, 9] for GA pseudocode. The genetic algorithm does not use local search; it may jump away from the best point even when it is very close to it.

4.2. Description of Landscape Smoothing Search (LSS)

Our proposed LSS is a general optimization technique suitable for a wide range of combinatorial optimization problems. The authors of this paper proposed the original LSS to solve the call admission control optimization problem for cognitive radio networks in [5]. To better explain the idea of LSS, we give an introduction to its basic form here. LSS can escape local optima and effectively conduct local search in the vast search spaces of hard optimization problems. The key idea is that we continuously change the objective/fitness function of the problem to be minimized with a set of smoothing functions that are dynamically manipulated during the search process, steering the heuristic out of local optima.

The objective function $F(X)$ is extended with a smoothing function $S(X)$ as follows:
$$F'(X) = F(X) + S(X) = F(X) + \lambda\, d(X), \tag{18}$$
where the smoothing function
$$S(X) = \lambda\, d(X) \tag{19}$$
contains the "landscape smoothing functions" $d(X)$. We define them as follows: $d(X) = 0$ if we hit the local optimum $X$ for the first time, and $d(X) \leftarrow d(X) + 1$ each time we hit the local optimum $X$ again. Here $\lambda$ is the smoothing step constant, which may also be changed adaptively.

We also record the best point reached so far, and the local optima during the search. Every time we hit a local optimum, we compare it with the global optimum (best point so far). Then we adjust the landscape smoothing factors to let the local search get away from the local trap.

As we keep changing the objective function, we gradually “smooth out” the landscape to get rid of the local holes that trap the search. We never fill a hole, however, before the second time we reach it. This way the search trace never misses a hole that could be the global optimum. See Figure 1.

Let $F$ be the objective function and $\lambda$ a set of landscape smoothing step factors, which can be constants or can be changed adaptively. $G$ represents the problem specification (e.g., network topology, demand matrix, or other constraints). We give the pseudocode for landscape smoothing search (LSS) in Algorithm 1.

Procedure LSS(F, λ, X, G)
begin
    create initial feasible solution X_0
    set d_h(X) := 0, (h = 1, …, H)
        (set all initial smoothing factors to zero)
    for i := 1 to max-iterations
    begin
        F′(X) := F(X) + [λ_h · d_h(X)]
        X_i := LocalSearch(F′, λ, X_{i−1}, G)
        for each h := 1 to H
        begin
            adjust d_h(X)
                (X is the local optimum point)
            if F(X_i) < F_best then
                F_best := F(X_i)
                X_best := X_i
            end
            adjust λ_h (for adaptive λ)
        end
    end
end procedure

LSS returns X_best, where F(X_best) is the minimum of all solutions found so far.
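To make the mechanism concrete, here is a minimal executable sketch of the basic LSS loop on a toy integer landscape. It assumes a steepest-descent local search, a single smoothing step $\lambda$, and a nonempty neighborhood for every state; all names are illustrative, not taken from the paper's implementation:

```python
def lss(F, neighbors, x0, lam=1.0, max_iters=100):
    """Minimal landscape smoothing search sketch (minimization).
    F: objective; neighbors(x): neighbor states (nonempty); lam: smoothing
    step; d counts repeat visits to local optima, as in (18)-(19)."""
    d = {}                                         # smoothing factors d(X), keyed by state
    x = x0
    x_best, f_best = x0, F(x0)
    for _ in range(max_iters):
        F_s = lambda y: F(y) + lam * d.get(y, 0)   # smoothed objective F'(X)
        while True:                                # steepest descent on smoothed landscape
            best = min(neighbors(x), key=F_s)
            if F_s(best) < F_s(x):
                x = best
            else:
                break                              # x is a local optimum of F'
        if x in d:
            d[x] += 1                              # repeat hit: raise the landscape here
        else:
            d[x] = 0                               # first hit: record it, do not fill the hole yet
        if F(x) < f_best:                          # track the true best point, not the smoothed one
            x_best, f_best = x, F(x)
    return x_best, f_best
```

On a two-basin landscape the repeated visits gradually raise the shallow basin until the search spills over the barrier into the deeper one.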

5. Initial Results with Original Landscape Smoothing Search (LSS)

In this section we demonstrate a sample non-linear hard VPN design problem and the optimization results achieved with LSS and simulated annealing (SA).

5.1. A Sample VPN

We use a VPN with 40 virtual links and 12 routes, shown in Figure 2, for simulation. For example, $e_2 = c_0 + c_8$ means that physical link $e_2$ consists of the two virtual links $c_0$ and $c_8$; $e_9 = c_{12} + c_{27} + c_{35}$ means that physical link $e_9$ consists of the three virtual links $c_{12}$, $c_{27}$, and $c_{35}$; the dashed route $r_2 = c_8 + c_9 + c_{10}$ means that route $r_2$ consists of the three virtual links $c_8$, $c_9$, and $c_{10}$; and the dotted route $r_5 = c_{18} + c_{19} + c_{20} + c_{21}$.

5.2. Initial Results with LSS

We use the sample VPN described in Section 5.1. The diagram in Figure 3 shows initial results with landscape smoothing search (LSS) and simulated annealing (SA).

Our optimization results were obtained on a PC with an AMD Sempron 2500+ CPU. The search time is the time needed to obtain the best solution so far; the time unit is 10 seconds. For each heuristic method, these measures were recorded each time there was an improvement in the best carried traffic value. Thus, at each point of a search curve, the x-axis value is the search time and the y-axis value is the best carried traffic value achieved so far. We find that LSS gives better results in the long run, but its search speed is slower than that of SA. When the objective function takes a very long time to calculate, the original landscape smoothing search (LSS) heuristic is slower than simulated annealing (SA). SA makes a possible move by evaluating a random neighbor to find a better objective value, whereas the original LSS makes a possible move by picking the best of all direct neighbors. For a case with $N = 40$ direct neighbors, LSS thus needs $N = 40$ objective function evaluations to make a single move. The objective function calculation for VPN design is itself a complex iterative process using (16); it may take 50 to 90 iterations to compute an accurate value, so each evaluation is time consuming. This explains why the original LSS method is slower than SA for the VPN design case.

6. Fast Landscape Smoothing Search (FLSS)

To improve the search speed of LSS, we modify the original LSS into a fast landscape smoothing search (FLSS). FLSS has two (or more) phases of search. The first phase we call the rough search phase: we divide the set of $N$ neighbors into several subsets, such as $N_1$, $N_2$, $N_3$, and $N_4$, and apply the LSS method to one subneighborhood at a time. This way we make every LSS move with far fewer objective function evaluations. The second phase we call the fine search phase: here we use the original LSS method on the whole neighborhood. This way we do not miss possible global optima, since in this phase we search all $N$ direct neighbors. The improved result is shown in Figure 4.
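A sketch of a single FLSS move may clarify the rough/fine distinction (the function and parameter names are ours; the default subset count of 4 mirrors the $N_1, \dots, N_4$ example above):

```python
def flss_move(F_s, x, neighbors, subsets=4, rough=True):
    """One move of fast landscape smoothing search on the smoothed objective F_s.
    Rough phase: scan one subneighborhood at a time, stopping at the first
    improving subset, so a move needs far fewer objective evaluations.
    Fine phase: scan the whole neighborhood, as in the original LSS."""
    nbrs = list(neighbors(x))
    if rough:
        k = len(nbrs) // subsets or 1            # subneighborhood size
        for start in range(0, len(nbrs), k):
            chunk = nbrs[start:start + k]
            best = min(chunk, key=F_s)
            if F_s(best) < F_s(x):
                return best                      # improving move found in this subset
        return x                                 # no subset improved: x is locally optimal
    best = min(nbrs, key=F_s)                    # fine phase: full neighborhood
    return best if F_s(best) < F_s(x) else x
```

In the rough phase each move then costs roughly $N/4$ objective evaluations instead of $N$, which is the source of the speedup when each evaluation requires solving the fixed-point system (16).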

7. More Numerical and Optimization Results

This section presents more numerical and optimization results of carried traffic load for the VPN design problem.

7.1. Optimization Results with Simulated Annealing (SA)

Figure 5 shows simulated annealing (SA) optimization search traces for function (15) with 3 different T-reduce values (the cooling speed factor). We can see that the SA convergence speed may be affected by the setting of the temperature reduction (cooling) value. At the beginning of the SA search procedure, SA converges very fast; then the convergence becomes very slow, and after a while it takes a long time for SA to find the next better value.

Figure 6 shows simulated annealing (SA) optimization search traces with different initial settings. In all these figures, a legend entry such as [Initial-2 = 4 3 2 1 4 3 2 1] means that the virtual link capacities are initialized as $C[0], C[1], C[2], C[3], C[4], C[5], C[6], C[7], \dots = 4, 3, 2, 1, 4, 3, 2, 1, \dots$. From Figure 6 we can see that the near-optimal value found by SA depends on the initial settings; different initial settings lead to slightly different SA results.

7.2. Optimization Results with Genetic Algorithm (GA)

Figure 7 shows genetic algorithm (GA) optimization search traces for function (15) with 3 sets of different $P_c/P_m$ values, where $P_c$ is the crossover probability and $P_m$ is the mutation probability. From Figure 7 we can see that the GA convergence speed may be affected by the settings of the crossover/mutation probability values. At the beginning of the GA search procedure, GA converges very fast; then the convergence becomes very slow, and after a while it takes a long time for GA to find the next better value.

Figure 8 shows genetic algorithm (GA) optimization search traces with different initial settings. From Figure 8 we can see that the near-optimal value found by GA depends on the initial settings. Here a notation such as [Initial-2 = 1 2 3 4 1 2 3 4] means that the virtual link capacities are initialized as $C[0], C[1], C[2], C[3], C[4], C[5], C[6], C[7], \dots = 1, 2, 3, 4, 1, 2, 3, 4, \dots$. Different initial settings lead to slightly different GA results. This search trend is similar to what we see with SA.

7.3. Optimization Results with Fast Landscape Smoothing Search (FLSS)

Figure 9 shows fast landscape smoothing search (FLSS) optimization search traces with different smoothing step factor $\lambda$ values. From Figure 9 we can see that the FLSS convergence speed may be slightly affected by the setting of the smoothing step factor $\lambda$. FLSS converges relatively fast and stabilizes smoothly.

Figure 10 shows fast landscape smoothing search (FLSS) optimization search traces with different initial settings. Here the notation [Initial-3 = 1 2 3 4 1 2 3 4] means that the virtual link capacities are initialized as $C[0], C[1], C[2], C[3], C[4], C[5], C[6], C[7], \dots = 1, 2, 3, 4, 1, 2, 3, 4, \dots$. From Figure 10 we can see that the near-optimal value found by FLSS depends on the initial settings; different initial settings lead to slightly different values.

These results show that FLSS is much less sensitive to the adjustable parameters than SA and GA. FLSS is also much less sensitive to the initial settings of the search starting point. With different initial settings, FLSS converges to three near-optimal values that are very close; the difference among the three values is almost negligible.

7.4. Three Optimization Methods Comparison

To compare the three optimization methods, we put fast landscape smoothing search (FLSS), simulated annealing (SA), and genetic algorithm (GA) optimization search traces together in Figure 11.

Here the three methods all use the same initial setting of [3 3 3 3 3 3 3 3]. FLSS uses a smoothing step factor $\lambda$ of 2.90, SA uses a cooling T-reduce factor of 0.7, and GA uses crossover/mutation probabilities $P_c/P_m$ of 0.9/0.9. We can see that FLSS starts slower than SA and GA at the beginning of the search, but it catches up later and achieves a better objective function value.

We can see that SA and GA are not likely to obtain better results after time unit 250, whereas FLSS keeps finding better results after time unit 250 and even 500. In this VPN case, FLSS slightly outperforms SA and GA in the long run, so FLSS is a very good candidate among heuristic methods.

The results of these heuristic methods are case dependent. Based on the three network design cases we used, we compare the features of these heuristic methods in Table 1.

LSS and FLSS use adaptive smoothing function and they reach the optima by finding the best neighbor. SA jumps randomly with reduced cooling probabilities and reaches the optimum by chance. GA uses crossover and mutation with 𝑃𝑐/𝑃𝑚 probabilities and reaches the optimum by chance. SA and GA might jump away from the global optimum even when they are very close to it.

8. Conclusion

In this paper, we model the VPN traffic loss caused by blocking not only on isolated links, but also at the network level. We take into account that the loss suffered on a link reduces the offered traffic of other links and vice versa. We formulate the VPN design problem as the optimization problem of maximizing the carried traffic in the VPN, which makes it a hard optimization problem. We used a heuristic method called landscape smoothing search (LSS) and applied it to this problem. We found that LSS can obtain better results than SA, but with a slower search speed; the reason is that in this VPN case, the calculation of the carried traffic objective function is very time consuming. To improve the speed of the original LSS, we proposed a new fast landscape smoothing search (FLSS) method.

The slow search speed of the original LSS is overcome in FLSS. Our FLSS method was also compared with popular heuristic methods such as simulated annealing (SA) and the genetic algorithm (GA). We find the FLSS technique simple to implement. The three techniques were tested in many experiments with different VPN initial settings and different adjustable parameters to compare their optimization performance. The results show that FLSS is much less sensitive to the adjustable parameters than SA and GA are, and it is also less sensitive to the initial settings of the search starting point. With different initial settings, FLSS converges to almost the same near-optimal value each time. The overall results show that FLSS outperforms the SA and GA techniques both in solution quality and in optimization speed. Based on these results, the fast landscape smoothing search (FLSS) technique is a strong candidate for solving hard optimization problems in network design.

Acknowledgment

The authors are grateful for the support of NSF Grant CNS-1018760.