Abstract

Developing effective, fairness-preserving optimization algorithms is of considerable importance in systems that serve many users. In this paper we show the results of the threshold accepting procedure applied to the extremely difficult problem of fair resource allocation in wireless mesh networks (WMNs). Fairness is modeled by allowing preferences regarding the distribution of Internet traffic between network participants. As the aggregation operator we utilize the weighted ordered weighted averaging (WOWA) operator. In the underlying optimization problem, the physical medium properties cause strong interference among simultaneously operating node devices, leading to nonlinearities in the mixed-integer pricing subproblem. That is where the threshold accepting procedure is applied. We show that the threshold accepting heuristic performs much better than the widely utilized simulated annealing algorithm.

1. Introduction

A wireless mesh network (WMN) is an organized, cooperating group of network devices communicating with each other by means of wireless media. The nodes are organized in a mesh topology, where each wireless device not only sends and receives its own data but also serves as a relay for other nodes. Some of the nodes can be connected to a cable or mobile network and serve as Internet gateways. This way the whole mesh network constitutes a decentralized way of providing Internet access to the participating nodes.

This network type offers numerous advantages, including low setup cost, independence of hardwired infrastructure, and flexibility. However, providing fair and efficient network management, including routing and scheduling, is not a straightforward task. The main source of difficulty lies in the physical medium properties that cause strong interference among simultaneously operating devices. Additionally, the link quality is a function of the distance and can be affected by obstacles present between the nodes. As a result, efficient network operation requires transmission scheduling, channel assignment, and transmission power determination.

A common optimization objective is maximization of the total throughput while retaining fairness in its distribution among the participants.

In network optimization problems fairness is often accomplished using lexicographic max-min (LMM) optimization. In the case of a convex attainable set, this corresponds to the max-min fairness (MMF) concept [1], which states that in the optimal solution it is impossible to increase any of the outcomes without decreasing some smaller (worse) ones [13]. In the nonconvex case such a strictly defined MMF solution may not exist, while the LMM solution always exists and coincides with the MMF solution whenever the latter exists (see [4] for a wider discussion). However, LMM is a rigid approach that usually does not allow taking into account any other criteria, in particular the overall efficiency (total throughput). Moreover, it requires sequential, repeated optimization of the original problem. A recent survey on fairness-oriented WMN optimization can be found in [5].

In this paper a more flexible approach based on the weighted ordered weighted averaging (WOWA) aggregation of outcomes is proposed. It provides a consistent, reasonable, and fairness-preserving methodology for modeling various preferences (from the extreme pessimistic through neutral to extreme optimistic) with regard to the distribution of Internet throughput between network participants. It is based on the OWA (ordered weighted averaging) operator [6, 7], in which the preference weights are assigned to the ordered values (i.e., to the largest value, the second largest, and so on) rather than to the specific criteria. The WOWA extension allows using additional weights (called importance weights) assigned to the specific outcomes.

The OWA operator provides a parameterized family of aggregation operators, which includes many well-known operators such as the maximum, the minimum, the order statistics, the conditional minimax [8] (known as conditional value at risk (CVaR) in the field of financial risk measurement), the median, and the arithmetic mean. The OWA satisfies the properties of strict monotonicity, impartiality, and, in the case of monotonically increasing weights, equitability (it satisfies the principle of transfers: an equitable transfer of an arbitrarily small amount from a larger outcome to a smaller outcome results in a more preferred achievement vector). Thus OWA-based optimization generates the so-called equitably efficient solutions (cf. [9] for the formal axiomatic definition). According to [9, 10], equitable efficiency expresses the concept of fairness, in which all system entities have to be treated equally, and in stochastic problems equitability corresponds to risk aversion [11]. Since its introduction, the OWA aggregation has been successfully applied to many fields of decision making [7, 12, 13]. When the OWA aggregation is applied to multicriteria optimization, the weighting of the ordered outcome values makes the OWA optimization problem nonlinear even for a linear programming formulation of the original constraints and criteria. Yager has shown that the nature of the nonlinearity introduced by the ordering operation allows one to convert the OWA optimization into a mixed-integer programming problem. It was later shown [14] that the OWA optimization with monotonic weights can be formulated as a standard linear program of higher dimension. A significant extension introduced by Torra [15] incorporates importance weighting into the OWA operator, forming the weighted OWA (WOWA) aggregation as a particular case of the Choquet integral using a distorted probability as the measure.
The WOWA averaging is defined by two weighting vectors: the preferential weights and the importance weights. It covers both the weighted means (with equal preferential weights) and the OWA averages (with equal importance weights) as special cases. Example applications of importance weights (applied to specific outcomes) include defining the size or importance of processes in a multiagent environment, setting scenario probabilities (if uniform objectives represent various possible values of the same uncertain outcome under several scenarios), or setting job priorities in scheduling problems. It was shown [16] that in the case of monotonic preferential weights the WOWA aggregation can also be modeled by a mere linear extension of the original problem.

This paper extends and refines our initial work on the subject presented in [17]. The list-based threshold accepting heuristic applied to the pricing problem is described. The results are compared to those of the exact MIP formulation.

This paper is organized as follows. In the next section, we present the wireless mesh network optimization problem together with an outline of the solution approach. Next, the WOWA aggregation operator is introduced. In the fourth section we deal with the pricing problem and show two alternative approximate solution algorithms. The last section presents the setup and the results of the computational experiments.

2. Flow Optimization in WMN

The WMN networking technology has been drawing increased attention over recent years (see the literature overview in [18, 19] and references therein). Due to the complexity of the problem, some simplifications are usually assumed. The problem considered in this paper can be stated as follows. A WMN with a number of nodes (routers and gateways) is given. The nodes are interconnected wirelessly in compliance with all the physical constraints and requirements, including signal loss with increasing distance and interference occurring during simultaneous operation. Each node can be either sending or receiving data, but not both at the same time. There are a number of modulation and coding schemes (MCSs) used for communication between the nodes, differing in speed, maximum allowable interference, and range. Each MCS has its signal to interference plus noise ratio (SINR) requirement that must be fulfilled in order to successfully transmit data. Only one fixed transmitting power and a single channel are assumed, but MCSs can be dynamically allocated. The network model consists only of links for which at least one MCS can be applied; this requirement reduces to a maximum allowable distance between the nodes.

Only downstream communication direction from gateways to routers is considered. For each router there is a single predefined path leading to a chosen gateway. The routers have elastic traffic demand, which means they can consume all the possible network capacity. The demands compete for network resources to get as much throughput as possible.

The objective is to maximize total throughput preserving fairness among competing demands.

The solution approach is based on the concept of compatible sets introduced in [20]. A compatible set consists of links that can operate at the same time within a given interference model. The basic solution concept consists in a linear approximation of the model and consecutive generation of compatible sets improving the current solution within a column generation scheme. The approximation is needed only if the time horizon is divided into fixed-length time slots; otherwise the solution is optimal.

Although we consider only a specific problem, the solution concepts involving application of WOWA operators can be utilized for many other variants of WMN problems including different capacity reservation models (see [19]).

2.1. Notation

Wireless mesh network topology is represented by a directed graph , where is the set of nodes from which we distinguish the set of gateways and set of mesh routers denoted, respectively, by and , and is the set of (radio) links.

The (potential) link between two nodes is modeled by a directed arc , where is the originating node that can transmit a signal of a given power to its terminating node . Additionally, we assume that if arc exists, then an opposite arc also exists. Furthermore, the sets of outgoing and incoming arcs from/to node are denoted, respectively, by and , while is the set of all arcs incident to node .

Nodes are transmitting using one of the available modulation and coding schemes (MCSs) denoted by , where is the set of all MCSs (to simplify the considerations, we assume that the set of available MCSs is , ). The (raw) data rate of transmission using MCS is denoted by .

The (radio) link can successfully transmit if the signal to noise ratio (SNR) for the arc is greater than a certain threshold value denoted by for at least one MCS : where mW is the ambient noise power.

At any arbitrary time instance the transmission of other nodes can interfere with transmission on . The corresponding signal to interference plus noise ratio (SINR) condition for successful transmission on using MCS reads as follows: where is the set of active nodes which are transmitting at the same time.

Moreover, we assume that a node can either transmit or receive or be inactive; that is,

Each router is connected with a selected gateway by a directed path (i.e., a subset of links, ) that is supposed to carry the entire downstream flow from gateway to router (to simplify the formulations, we do not consider the upstream direction). The set of routers is considered as demands and denoted by , where . Let be the given set of paths between routers and gateways, where . For each link , the set of all indices of paths in that contain this link will be denoted by .

2.2. Compatible Sets

A compatible set (CS) is defined as a subset of links, together with a particular MCS that each link is using, such that all these links can be active simultaneously (i.e., transmit without generating too much interference with the other links). In other words, a compatible set is defined by , where the variables form a feasible solution satisfying (2) and (3).

2.2.1. Master Problem

Using the family of compatible sets denoted by , the formulation of the max-min flow optimization problem reads as follows:

In the presented formulation, is the time of network operation, is the (raw) data rate of a transmission using MCS allocated to link in compatible set , that is, either or 0, depending on whether link is active or not in the compatible set , and is total amount of data that can be transmitted over link in a time interval .

This formulation is a noncompact, continuous approximation of the MIP problem involving time slots (see [19]); continuous variables define the number of time slots assigned to a compatible set within the time .

Since grows exponentially in the network size, the solution is to use the column generation technique [3, 21], in which not all the columns of the constraint matrix are stored. Instead, only a subset of the variables (columns), which can be seen as an approximation (restriction) of the original problem, is kept. The column generation algorithm iteratively modifies this subset of variables by introducing new variables in a way that improves the current optimal solution. At the end, the subset contains all the variables necessary to construct the solution that is optimal for the full problem with all possible columns. New columns are generated in the pricing problem.
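Schematically, the loop can be sketched as follows (the `restricted_master` and `pricing` callables are placeholders standing in for the master LP and the pricing MIP of the following subsections, not their actual implementations):

```python
def column_generation(restricted_master, pricing, initial_columns, max_iters=100):
    """Generic column generation loop.

    restricted_master(columns) -> (objective, duals) for the restricted LP
    pricing(duals)             -> (column, reduced_price), the best new column
    """
    columns = list(initial_columns)
    objective, duals = restricted_master(columns)
    for _ in range(max_iters):
        column, reduced_price = pricing(duals)
        if reduced_price <= 0:
            break  # no column can improve the current optimum
        columns.append(column)
        objective, duals = restricted_master(columns)
    return objective, columns
```

Termination uses the classical criterion: when the best reduced price is nonpositive, no column can improve the restricted master, so its current solution is also optimal for the full problem.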

2.2.2. Pricing Problem

The pricing problem we consider corresponds to a WMN system in which there are multiple MCSs available and each node can use a different MCS in different compatible sets. The following formulation is referred to as dynamic allocation of MCSs to nodes:

In this formulation, , are the current optimal dual variables associated with constraints (6). At each iteration we are interested in generating the CS for which the reduced price has the largest positive value, as we can expect this to improve the current optimal solution as much as possible. The is a binary variable indicating whether node transmits using MCS , is a binary variable indicating whether node transmits, is a binary variable indicating whether link is scheduled to be active with the MCS , is a binary variable indicating whether link is active, and is the (raw) data rate of a transmission allocated to link . Notice that, in this formulation, and are auxiliary variables; thus they can either be eliminated or their binarity requirement can be dropped. Moreover, observe that applying (2) directly to our model would result in a bilinear constraint. Hence, we have introduced additional (continuous) variables to make constraint (17) linear. This is achieved by adding constraints (15)-(16); that is, if both and are equal to 1, and 0 otherwise.

Each node and each link can use at most one MCS in the compatible set (11)-(12). At most one link incident to node can be active (13), and exactly one link outgoing from node is active and uses MCS (14), provided the node is active and uses this MCS in the compatible set. Constraints (15)–(17) assure an admissible SINR for link using MCS in the compatible set. The (raw) data rate of link in the compatible set is given by (18).

3. Fair Aggregation Operators

As stated before, the basic operator used to preserve fairness among outcomes is the lexicographic max-min (LMM), which is equivalent to MMF for linear problems. In such a case it is possible to carry out the MMF procedure with a simple algorithm that in each step uses the dual information to determine the outcomes that are blocked at their highest possible values. In the subsequent steps, only the outcomes that have not been blocked before are optimized (for details see [19]).

On the other hand, in the WOWA aggregation, the original problem is extended by auxiliary constraints and solved in a single step. Let us first introduce the formalization of the OWA operator. In the OWA aggregation of outcomes preferential weights are assigned to the ordered values rather than to the specific criteria: where is the ordering map with and there exists a permutation of set such that for .
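The inline formula did not survive typesetting here; under assumed notation ($y$ an outcome vector of dimension $m$, $w$ the preferential weights), the standard OWA definition, ordering the outcomes from worst to best, reads:

```latex
\mathrm{OWA}_{w}(y) \;=\; \sum_{i=1}^{m} w_i\, \theta_i(y),
\qquad
\theta_1(y) \le \theta_2(y) \le \dots \le \theta_m(y),
\qquad
\theta_i(y) = y_{\tau(i)},
```

where $\tau$ is a permutation of $\{1,\dots,m\}$ realizing the ordering.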

If the weights are monotonic, , the OWA aggregation has the property of equitability [14], which guarantees that an equitable transfer of an arbitrarily small amount from a larger outcome to a smaller outcome results in a more preferred achievement vector. Every solution maximizing the OWA function is then an equitably efficient solution to the original multiple criteria problem. Moreover, for linear multiple criteria problems every equitably efficient solution can be found as an optimal solution to the OWA aggregation with appropriate weights.

For the maximization problem the OWA objective aggregation can be formulated as linear extension of the original problem, as follows. Let us apply linear cumulative map to the ordered achievement vectors : As stated in [14], for any given vector , the cumulated ordered coefficient can be found as the optimal value of the following LP problem: The ordered outcomes can be expressed as differences for and . Hence, the maximization of the OWA operator (20) with weights can be expressed in the form where coefficients are defined as and for and is the feasible set of outcome vectors . If the original weights are strictly decreasing, then for .
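Under the same assumed notation, the elided formulas follow the linearization of [14]: the cumulated ordered value $\bar{\theta}_k(y)=\sum_{i=1}^{k}\theta_i(y)$ (the sum of the $k$ smallest outcomes) is the optimal value of an LP, and the whole OWA maximization becomes a single linear program:

```latex
\bar{\theta}_k(y) \;=\; \max_{t_k,\, d_{ik}} \Bigl\{\, k\, t_k - \sum_{i=1}^{m} d_{ik}
   \;:\; t_k - d_{ik} \le y_i,\;\; d_{ik} \ge 0,\;\; i = 1,\dots,m \,\Bigr\}

\max \;\; \sum_{k=1}^{m} w'_k \Bigl( k\, t_k - \sum_{i=1}^{m} d_{ik} \Bigr)
\quad \text{s.t.} \quad t_k - d_{ik} \le y_i, \quad d_{ik} \ge 0, \quad y \in Q,
```

with $w'_k = w_k - w_{k+1}$ for $k = 1,\dots,m-1$, $w'_m = w_m$, and $Q$ the feasible set of outcome vectors.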

For the WMN flow optimization problem (4)–(9) the final OWA aggregation of the outcomes for all demands/routers can be stated as the following LP model: subject to where and is a feasible set of flows/throughputs defined by (6)–(9).

The OWA aggregation (20) allows modeling various aggregation functions, from the minimum ( for ), through the arithmetic mean ( for ), to the maximum ( for ). However, it is not possible to express the weighted mean. Due to the property of impartiality and neutrality with respect to the individual attributes, the OWA aggregation does not allow representing any importance weights allocated to the specific attributes.

The WOWA aggregation is a generalization of the OWA that allows assigning importance weights to specific criteria [22]. In the case of WMN, the importance weights could express the number of end-users hidden behind each router. For example, the importance weight of the router with 5 directly connected users should be 5 times greater than the importance weight of the router with only a single directly connected user.

Let be an -dimensional vector of importance weights such that for and . The corresponding weighted OWA aggregation of vector is defined [15] as follows: with where is increasing function interpolating points together with the point and representing the ordering permutation for (i.e., ). Moreover, function is required to be a straight line when the points can be interpolated in this way. Due to this requirement, the WOWA aggregation covers the standard weighted mean with weights as a special case of equal preference weights ( for ). Actually, the WOWA operator is a particular case of the Choquet integral using a distorted probability as the measure [23].
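With the symbols lost in typesetting, Torra's definition can be restated under assumed notation ($p$ the importance weights, $w$ the preferential weights, $\tau$ the ordering permutation) as:

```latex
\mathrm{WOWA}_{w,p}(y) \;=\; \sum_{i=1}^{m} \omega_i\, y_{\tau(i)},
\qquad
\omega_i \;=\; w^{*}\Bigl(\sum_{k \le i} p_{\tau(k)}\Bigr) \;-\; w^{*}\Bigl(\sum_{k < i} p_{\tau(k)}\Bigr),
```

where $w^{*}$ is the increasing function interpolating the points $(k/m,\ \sum_{j \le k} w_j)$ together with $(0,0)$, required to be a straight line whenever the points can be interpolated that way.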

Note that such a WOWA definition allows for a clear interpretation of the importance weights as agent (demand) repetitions [24]. Splitting an agent into two agents does not cause any change in the final distribution of outcomes. For theoretical considerations one may assume that the problem can be transformed (disaggregated) into an unweighted one (that is, one where all the agent importance weights are equal to 1); see [22, 25] and the examples therein. Thus, the WOWA aggregation with increasing preferential weights is equitable, since an equally important unit of a smaller outcome is considered with a larger weight.

As shown in [22], maximization of an equitable WOWA aggregation with decreasing preferential weights may also be implemented as the LP expansion of the original problem. In the case of the WMN flow optimization problem (6)–(9), this can be stated as follows: subject to If the importance weights are equal to , the model reduces to the OWA aggregation.

A special case of the generalized WOWA aggregation is defined for a single breakpoint and corresponds to optimization of a predefined quantile of the worst outcomes; in finance it is known as CVaR (conditional value at risk). It can be computed as a standard linear extension of the original problem [22]: subject to
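The elided linear extension has the standard CVaR shape; under assumed notation ($\beta \in (0,1]$ the quantile parameter, $p_i$ the importance weights):

```latex
\max_{t,\, d_i,\, y} \;\; t \;-\; \frac{1}{\beta} \sum_{i=1}^{m} p_i\, d_i
\quad \text{s.t.} \quad d_i \ge t - y_i, \quad d_i \ge 0, \quad i = 1,\dots,m, \quad y \in Q.
```

Roughly, small $\beta$ pushes the criterion toward the worst-case (max-min) outcome, while $\beta = 1$ recovers the importance-weighted mean.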

4. Algorithms

4.1. List-Based Threshold Accepting

The list-based threshold accepting algorithm (LBTA) is an extension of the threshold accepting metaheuristic, which belongs to the randomized search class of algorithms. This relatively little-known heuristic has been successfully applied to many difficult problems [26–30]. Since the problem of fair resource allocation in wireless mesh networks is extremely challenging, we have decided to try this underappreciated algorithm.

The search trajectory of LBTA traverses the solution space by moving from one solution to a random neighbor of that solution, and so on. Unlike greedy local search methods, which consist of choosing a better solution from the neighborhood of the current solution as long as one can be found (hill climbing), threshold accepting allows choosing a worse candidate solution based on a threshold value. In the general concept of the threshold accepting algorithm it is assumed that a set of decreasing threshold values is given before the computation, or that an initial threshold value and a decrease schedule are specified. The rate at which the values decrease controls the trade-off between diversification (associated with large threshold values) and intensification (small threshold values) of the search. It is immensely difficult to predict how the algorithm will behave when a certain decrease rate is applied to a given problem without running the actual computation. It is also very common that the algorithm with the same parameters works better for some problem instances and significantly worse for others. These observations led to the list-based branch of the threshold accepting metaheuristic.

In the list-based threshold accepting approach, instead of a predefined set of values, a list is dynamically created during a presolve phase of the algorithm. The list, which in a way contains knowledge about the search space of the underlying problem, is then used to solve it.

4.1.1. Creating the List of Threshold Values

The first phase of the algorithm consists of gathering information about the search space of the problem that is to be solved. From an initial solution a neighbor solution is created using a move function (perturbation operator) chosen at random from a predefined set of functions. If the candidate solution is better than the current one, it is accepted and becomes the current solution. Otherwise, a threshold value is calculated as the relative change between the objective values of the two solutions and added to the list, where is the objective function value of the solution and is the set of all feasible solutions. For this formula to work, it is implicitly assumed that the objective values are positive. This procedure is repeated until the specified size of the list is reached. For the algorithm overview see Algorithm 1.

Require: Initial solution s, list size l_max, set of move operators M
(1)   L ← ∅
(2)  while |L| < l_max do
(3)    m ← random(M)
(4)    s′ ← m(s)
(5)    if f(s′) ≤ f(s) then
(6)     T ← (f(s) − f(s′)) / f(s)
(7)     L ← L ∪ {T}
(8)     s ← s′
(9)   else
(10)   s ← s′
(11)  end if
(12) end while
(13) return  list L
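A minimal Python sketch of this phase (the solution representation, objective `f`, and move operators are illustrative placeholders, not the WMN pricing structures; this follows the common LBTA variant in which the sampling walk accepts every neighbor, worse ones included):

```python
import random

def build_threshold_list(s0, f, moves, list_size, rng):
    """Phase 1 of LBTA: sample thresholds from the search space.

    s0        - initial solution
    f         - objective to maximize (assumed positive on visited solutions)
    moves     - perturbation operators mapping a solution to a random neighbor
    list_size - target number of thresholds
    rng       - a random.Random instance
    """
    s, thresholds = s0, []
    while len(thresholds) < list_size:
        cand = rng.choice(moves)(s)
        if f(cand) <= f(s):
            # record the relative worsening as a threshold value
            thresholds.append((f(s) - f(cand)) / f(s))
        s = cand  # the sampling walk accepts every neighbor
    return s, thresholds
```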

4.1.2. Optimization Procedure

The second phase of the algorithm is the main optimization routine, in which a solution to the problem is found. The algorithm itself is very similar to that of the previous phase. We start from an initial solution, create a new solution from the neighborhood of the current one using one of the move functions, and compare both solutions. If the candidate solution is better, it becomes the current one. Otherwise, a relative change is calculated. Up to this point the algorithms in both phases are identical. The difference in the optimization procedure is that we compare the new threshold value with the largest value from the list. If the new threshold value is larger, the new solution is discarded. Otherwise, the new threshold value replaces the largest value in the list, and the candidate solution is accepted for the next iteration. The best solution found during the optimization process is considered final.

The list-based threshold accepting algorithm also incorporates an early termination mechanism: after a specified number of consecutive candidate solutions have been discarded, the optimization is stopped and the best solution found so far is returned.

The optimization procedure of the list-based threshold accepting algorithm is shown in Algorithm 2.

Require: Initial solution s, thresholds list L, set of move operators M
(1)   s* ← s
(2)   r ← 0
(3)   while r < r_max do
(4)    m ← random(M)
(5)    s′ ← m(s)
(6)    if f(s′) > f(s) then
(7)     if f(s′) > f(s*) then
(8)      s* ← s′
(9)     end if
(10)    s ← s′
(11)    r ← 0
(12)   else
(13)    T ← (f(s) − f(s′)) / f(s)
(14)    if T ≤ max(L) then
(15)     replace max(L) with T in L
(16)     s ← s′
(17)     r ← 0
(18)    else
(19)     r ← r + 1
(20)   end if
(21)  end if
(22) end while
(23) return  s*
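A matching Python sketch of the optimization phase, under the same placeholder conventions (the rejection limit plays the role of the early termination mechanism):

```python
import random

def lbta_optimize(s0, f, moves, thresholds, rng, max_rejects=1000):
    """Phase 2 of LBTA: threshold-accepting search driven by the list.

    thresholds  - non-empty list built in phase 1 (nonnegative values)
    max_rejects - stop after this many consecutive rejections
    """
    s = best = s0
    rejects = 0
    while rejects < max_rejects:
        cand = rng.choice(moves)(s)
        if f(cand) > f(s):
            s, rejects = cand, 0
        else:
            t_new = (f(s) - f(cand)) / f(s)
            t_max = max(thresholds)
            if t_new <= t_max:
                # accept the worse solution and tighten the list: replacing
                # the largest threshold slowly shifts the search from
                # diversification toward intensification
                thresholds[thresholds.index(t_max)] = t_new
                s, rejects = cand, 0
            else:
                rejects += 1
        if f(s) > f(best):
            best = s
    return best
```

With an all-zero threshold list the procedure degenerates to hill climbing, which is exactly the observation made in Section 4.4.1.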

4.2. Simulated Annealing

Simulated annealing (SA) was first introduced by Kirkpatrick et al. [31], while Černý [32] pointed out the analogy between the annealing process of solids and solving combinatorial problems. Applications of the SA algorithm to optimization problems in various fields have been studied [33–36], as well as to the WMN problem [17].

The optimization process of the simulated annealing algorithm can be described in the following steps. At the start, an initial solution is required. Then, repeatedly, a candidate solution is randomly chosen from the neighborhood of the current solution. If the candidate solution is the same as or better than the current one, it is accepted and replaces the current solution. Even if the generated solution is worse than the current one, it still has a chance to be accepted, with the so-called acceptance probability. This probability is a function of the difference between the objective values of the current and the candidate solutions and depends on a control parameter borrowed from thermodynamics, called the temperature. After a number of iterations, the temperature is decreased by a reduce factor, and the process continues as described above. The optimization is stopped either after a maximum number of iterations or when a minimum temperature is reached. The best solution found during the annealing process is considered final.

For the algorithm overview see Algorithm 3, and for the overview of the SA parameters see Table 1.

Require: Initial solution s, initial temperature T_0, reduce factor α
(1)   s* ← s; T ← T_0
(2)  for k ← 1 to k_max do
(3)   for n ← 1 to n_max do
(4)    s′ ← neighbor(s)
(5)    Δ ← f(s′) − f(s)
(6)    if Δ ≥ 0 or random(0, 1) < exp(Δ/T) then
(7)     s ← s′
(8)    end if
(9)    if f(s) > f(s*) then
(10)    s* ← s
(11)   end if
(12)  end for
(13)  T ← α · T
(14) end for
(15) return  s*
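A compact Python sketch of the procedure (parameter names and default values are illustrative stand-ins for those of Table 1):

```python
import math
import random

def simulated_annealing(s0, f, neighbor, rng,
                        t0=1.0, t_min=1e-3, alpha=0.9, moves_per_temp=100):
    """Simulated annealing for maximization.

    neighbor - move operator returning a random neighbor of a solution
    t0, t_min, alpha - initial temperature, stop temperature, reduce factor
    """
    s = best = s0
    t = t0
    while t > t_min:
        for _ in range(moves_per_temp):
            cand = neighbor(s)
            delta = f(cand) - f(s)
            # always accept improvements; accept a worse candidate with
            # probability exp(delta / t), which shrinks as t drops
            if delta >= 0 or rng.random() < math.exp(delta / t):
                s = cand
            if f(s) > f(best):
                best = s
        t *= alpha  # geometric cooling by the reduce factor
    return best
```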

4.3. Neighborhood Function

The most problem-specific component of both the SA and the LBTA algorithms, which always needs a dedicated approach and implementation, is the procedure of generating a candidate solution from the neighborhood of the current one, called a perturbation scheme, a transition operation/operator, or a move function. Although there are many ways to accomplish this task, we have examined the following operators.
(1) Deactivate a node at random.
(2) Activate a random node and select the MCS with the smallest raw rate.
(3) Switch the MCS to one with a higher raw rate.
(4) Switch the MCS to one with a lower raw rate.
(5) Switch the MCS to a random one.

In order to generate a new solution, the LBTA algorithm applies one of the aforementioned operators, chosen at random, to the current solution. SA, on the other hand, uses a single compound operator during the whole optimization procedure, a combination of operators 1, 2, and 5, so that a transition from the initial solution to any feasible solution is possible (see Algorithm 4).

Require: Current solution, i.e., a compatible set C
Ensure:  C′ = neighbor(C)
(1)   Choose at random a node v and an MCS m that satisfy (11)–(14).
(2)  if node v is active in C then
(3)   if m is the MCS currently used by v then
(4)        deactivate node v
(5)   else
(6)        switch the MCS of v to m
(7)   end if
(8)  else
(9)       activate node v
(10)      select MCS m for v
(11) end if

4.4. Implementation Details
4.4.1. Zero Elements

In the first phase of the list-based threshold accepting algorithm the list is populated with values of the relative change between two solutions. After careful consideration, we believe that including zeros in the list is a misconception. In the actual optimization procedure, that is, the second phase, the threshold value is computed only if the new solution is strictly worse than the current one, which means that the calculated relative change always has a positive value. The new threshold value is compared with the largest value in the list. Thus, we can distinguish three cases.
(1) The largest value in the list is zero: since thresholds are nonnegative by definition, in this case the list contains only zero elements and will not change throughout the whole procedure. Comparing a positive threshold value against zero results in discarding the candidate solution. The conclusions are as follows: (a) it does not matter how many zeros are in the list; the effective size of the list is equal to one; (b) the algorithm reduces to a hill climbing algorithm that accepts candidate solutions which are at least as good as the current one.
(2) The largest value in the list is positive and the new threshold does not exceed it: the largest (positive) threshold value in the list is replaced by the smaller (positive) new value. The number of zero elements in the list remains the same throughout the whole procedure and is therefore completely irrelevant to the optimization process. The effective list size is equal to the number of positive elements.
(3) The largest value in the list is positive and the new threshold exceeds it: the new solution is discarded and the list remains unchanged.

The main idea behind the list is to control the diversification and intensification of the search process. In the early stage of the search, the algorithm should allow covering as much of the solution space as possible, which means that the thresholds in the list are expected to be large enough to make that happen. In the middle stage, the algorithm should slowly stop fostering diversification and begin to foster intensification of the search. In the end stage, the intensification should be the strongest; that is, the list is supposed to contain smaller and smaller threshold values, which induces the discarding of worse solution candidates. As a consequence, the algorithm converges to a local or possibly even a global optimum.

4.4.2. Stopping Criterion

Even though it is equipped with an early termination mechanism, the LBTA algorithm does not have a solution-space-independent stopping criterion. If the number of consecutively discarded worse solutions is set too high, the algorithm will run for an unacceptably long time (as observed during preliminary tests). Hence, we propose using a global counter of iterations, so that when a limit is reached the algorithm terminates gracefully.

5. Numerical Experiments

The problem defined by the constraints (6)–(9) with the network flows as the optimization criteria was optimized with different aggregation operators: max-min, lexicographic max-min (LMM), CVaR (31)-(32), and WOWA (28)-(29). For the pricing problem (10)–(18) we applied the two approximate methods: the list-based threshold accepting (LBTA) algorithm and the simulated annealing (SA) algorithm and compared them to the exact MIP approach solved using the CPLEX 12.1 optimization package.

The numerical experiments were performed on a number of randomly generated problem instances of different sizes.

The algorithm for generating network topology instances can be described as follows. A grid of length m of points is created. Each of the grid points can be chosen to be a mesh router or a mesh gateway. First, the location of each gateway is chosen at random. Then, for each router, a location is chosen at random that satisfies condition (1) for at least one link and MCS. This condition is equivalent to , where is the distance between the gateway and the router and is the maximum distance for the selected MCS. Finally, paths rooted in the gateways are established by iteratively connecting the neighboring routers that are reachable with the highest link rate and, if possible, with the lowest hop count. The specific data for the different MCSs are presented in Table 2.
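A simplified Python sketch of this generator (the grid size, spacing, and maximum MCS range are illustrative placeholders for the Table 2 data; reachability is checked against the gateways and the already placed routers):

```python
import math
import random

def generate_instance(n_gateways, n_routers, grid=10, cell=100.0,
                      max_dist=250.0, rng=None):
    """Place gateways at random grid points, then add routers that are
    within MCS range of some gateway or previously placed router."""
    rng = rng or random.Random()
    points = [(i * cell, j * cell) for i in range(grid) for j in range(grid)]
    gateways = rng.sample(points, n_gateways)
    free = [p for p in points if p not in gateways]
    routers = []
    while len(routers) < n_routers and free:
        p = free.pop(rng.randrange(len(free)))
        # keep the point only if some gateway or placed router is in range,
        # i.e. the distance form of condition (1) holds for at least one link
        if any(math.dist(p, q) <= max_dist for q in gateways + routers):
            routers.append(p)
    return gateways, routers
```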

Although the determination of preferential weights is an important issue in the theory of ordered weighted averaging [37–39], a simple generation methodology was chosen for the performance check. All the weights, except two, form a strictly decreasing sequence with a fixed step, while the two selected weights differ from the preceding ones by a different step. The importance weights were generated as random values uniformly distributed over a fixed range and then normalized.
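One possible reading of this scheme is sketched below. The step sizes and the two selected positions are hypothetical placeholders, since the paper's concrete values are not reproduced here:

```python
import random

def generate_weights(n, step=0.01, big_step=0.05, special=(2, 5), seed=0):
    """Sketch of the simple weight-generation scheme described above.
    Preferential weights decrease by a fixed step, except at the two
    selected positions, where a different (larger) step is used; the
    concrete values are illustrative assumptions. Importance weights
    are drawn uniformly at random and normalized to sum to one.
    """
    rng = random.Random(seed)
    pref, w = [], 1.0
    for k in range(n):
        if k > 0:
            w -= big_step if k in special else step
        pref.append(w)
    imp = [rng.random() for _ in range(n)]
    total = sum(imp)
    return pref, [v / total for v in imp]
```

The normalization of the importance weights ensures they form a proper weighting vector regardless of the sampling range.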

For the LBTA algorithm we used the list size of 50000 elements. This value was chosen based on our preliminary tests for selected problem sizes.

To better compare the relative performance of the LBTA and SA algorithms, the only stopping criterion for a single run was reaching exactly 300000 iterations, the same for all computations and problem sizes. This way we could compare both the speed and the convergence per iteration.

All the experiments were performed on an Intel Core i7 3.4 GHz processor, using the CPLEX 12.1 optimization library for the linear master problem. The results are averages over randomly generated problems of a given size. Computing times are presented in Table 3, optimal objective values in Table 4, and the total number of columns (compatible sets) generated (with either LBTA or SA) in Table 5. In all tables, the test cases for which the 600 s timeout occurred are marked with a “—” sign.

A note is required on the LMM optimal problem values (Table 4). Here, the objective value of the last step of the LMM algorithm is given. This means that if only suboptimal values were reached in the previous steps, the last step may yield a better (greater) value than the exact algorithm does. The optimal objective values for LMM should therefore be treated only as a hint about the algorithm's performance and not as an absolute quality measure.

The most noticeable advantage of the LBTA algorithm over the SA algorithm, when applied to the WMN problem, is the computing time; in many cases LBTA is faster than SA by an order of magnitude. The reason for this lies not only in the computing speed of LBTA but also in the quality of its results, because the latter affects the number of generated columns (compatible sets). One can also notice the generally better quality of the results, particularly for larger problems, also in comparison with the exact MIP model.

We can also observe that the computation time increases with the number of routers. This is expected and easily explained: as the network grows, the number of variables increases, and hence more compatible sets need to be computed. More interestingly, for a fixed number of routers, the computation time is highest for the smallest number of gateways. This can be explained by the fact that when there are few gateways, many paths between routers and gateways share the same link, which makes finding feasible compatible sets more difficult. This effect is especially pronounced for the WOWA aggregation operator, for which the computation time stays constant or decreases with the number of gateways as long as the number of routers is fixed.

6. Conclusion

Effective, general-purpose techniques are of considerable importance in many optimization areas. We have shown that one of the most promising is the list-based threshold accepting metaheuristic. When applied to the pricing problem of WMN optimization, it offers a substantial advantage over the classic and widely used simulated annealing algorithm. We have also shown that advanced OWA-based aggregation operators applied to flow optimization in wireless mesh networks can be modeled and solved effectively in comparison with the traditional lexicographic max-min operators.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The research was partially supported by the National Science Centre, Poland, under Grant no. 2011/01/B/ST7/02967 “Integer programming models for joint optimization of link capacity assignment, transmission scheduling, and routing in fair multicommodity flow networks.”