Abstract

Optimization models for designing and operating complex systems are mainly focused on efficiency metrics such as response time, queue length, throughput, and cost. However, in systems that serve many entities there is also a need to respect fairness: each system entity ought to be provided with an adequate share of the system’s services. Still, due to constraints arising from system operations, fair treatment of the entities does not directly imply that each of them is assigned an equal amount of the services. This leads to concepts of fair optimization expressed by equitable models that represent inequality-averse optimization rather than strict inequality minimization; a widely applied example of this concept is the so-called lexicographic maximin optimization (max-min fairness). The fair optimization methodology delivers a variety of techniques to generate fair and efficient solutions. This paper reviews fair optimization models and methods applied to systems based on some kind of network of connections and dependencies, especially fair optimization methods for location problems and for resource allocation problems in communication networks.

1. Introduction

System design and optimization often lead to diverse allocation problems where limited means must be assigned to competing agents or activities so as to achieve the best overall system performance. Depending on the context, the allocation decisions may pertain to costs, tasks, goods, or other resources that can be assigned to one or several agents (actually most allocation problems can be interpreted as resource allocation problems). Such problems arise in numerous applications of considerable complexity with system components being users, stakeholders and their coalition systems, economic and governmental institutions, policy systems, environmental systems [1, 2], and so forth. Very often complex systems that involve resource allocation can essentially be treated as systems of systems [3, 4].

The generic resource allocation problem may be stated as follows. Each activity is measured by an individual performance function that depends on the resource levels assigned to that activity. A larger function value is considered better, as in the case when the performance is measured in terms of assigned system capacity, quality of service level, service amount available, and so forth. In practical applications, one can distinguish different variants of the general allocation problem depending on whether the resource is divisible or not. In particular, one-to-one allocation of indivisible resources leads to the well-known assignment problem, while many-to-many allocation problems arise in task scheduling where a task can be assigned in parallel to several agents, with each agent being potentially in charge of several tasks.

Most approaches to allocation problems are focused on efficiency-based objectives. However, the maximization of either total or average results across all relevant agents may require compromising individual agents for the good of others, as long as everyone’s good is taken impartially into account. Thus, with the increasing awareness of system inequity resulting from solely pursuing efficiency, a number of fairness- or equity-oriented approaches have been developed; a particular example is provided by models of resource allocation that try to achieve some form of fairness in resource allocation patterns [5]. In general, these models relate to the optimization of systems which serve many users, where the quality of service provided to every individual user defines the optimization criteria. That pattern applies, among others, to telecommunication and Internet networks: in those networks it is important to allocate network resources, such as available bandwidth, so as to provide equitable performance to all services and all origin-destination pairs of nodes [6, 7]. Still, there are many other pressing examples of systems where fair distribution of resources is required. Problems of efficient and fair resource allocation arise in complex systems of systems when the system combines a number of component systems such as resource supply systems, utilization systems at demand sites or users, stakeholders and their coalition systems, economic and governmental institutions, policy systems, and environmental systems. Actually, addressing fairness in particular types of systems of systems has become a great challenge of the 21st century [8], as fairly dividing limited natural resources (such as fossil fuels, clean water, and the environment’s capacity to absorb greenhouse gases) is perceived as being of utmost importance.

Essentially, fairness is an abstract sociopolitical concept that implies impartiality, justice and equity. In order to ensure fairness in a given system, all system entities have to be equally well provided with the system’s services. For example, the issue of equity is widely recognized in the analysis of locating public services, where the clients of a system are entitled to fair treatment according to community regulations. In that context, the decisions often concern the placement of service centers or other facilities at such positions that all users are treated in an equitable way with respect to certain criteria [9]. In particular, location of the facilities pertaining to public services, such as police and fire departments, and emergency medical facilities, should provide fair response time to all demand locations within a metropolitan area. Similarly, water resources should be allocated fairly [10].

As far as technical systems are concerned, the importance of fairness was recognized early with respect to problems of allocation of bandwidth in telecommunication networks [11, 12] (resulting in many models and methods of fair optimization [7]), flight scheduling [13], and allocation of takeoff and landing “slots” at airports [14]. In such areas as allocation of resources in high-tech manufacturing and optimal allocation of water and energy resources, the context of fair resource allocation was additionally enriched by considering possible substitutions among the resources; models with such substitutions are presented in [5, Ch. 4] and [15–17].

In general, complex systems require mathematical programming models in order to describe the dependencies and to enable system optimization. Many such models are based on some kind of network of connections and dependencies. In particular, a wide range of system models is related to some kind of network flows that express realizations of competing activities [18]. This applies to telecommunication systems, power distribution systems, transportation systems, logistics systems, and so forth. The discrete location problems can also be viewed in terms of such network systems [19, 20].

The general purpose of this paper is to review fair optimization models and algorithms supporting efficient and fair resource allocation in problems related to such network models. The particular focus is on location-allocation problems and allocation problems related to communication networks since in those areas the fair optimization concepts have been extensively developed and widely applied.

The paper is organized as follows. In the next section we present methodological foundations of fair optimization models. In Section 3, the most important models and methods of fair optimization in communication networks are reviewed. Section 4 aims at reviewing applications of fairness optimization in location and allocation problems. The computational complexity issues are addressed in Section 5. The paper is concluded by addressing the most important directions of the development of fair optimization methodology for network systems.

2. Fairness, Equity, and Fair Optimization

2.1. Efficiency and Equity

The generic allocation problem deals with a system comprising a set of services (activities, agents) and a given set of allocation patterns (allocation decisions). For each service, a function of the allocation pattern is defined. This function measures the outcome (effect) of the allocation pattern for that service. In the applications we consider, this measure usually expresses the service quality. In general, outcomes can be measured (modeled) as service time, service costs, and service delays, as well as in a more subjective way. In typical formulations a larger value of the outcome means a better effect (higher service quality or client satisfaction). Otherwise, the outcomes can be replaced with their complements with respect to some large number. Therefore, without loss of generality, we can assume that each individual outcome is to be maximized, which allows us to view the generic resource allocation problem as a vector maximization model. Consider where is a vector-function that maps the decision space into the criterion space and denotes the feasible set. We consider complex systems represented by mathematical programming models, and specifically models based on some network of connections and dependencies.

An outcome vector is attainable if it expresses outcomes of a feasible solution (i.e., ). The set of all the attainable outcome vectors is denoted by . Note that, in general, convexity of the feasible set and concavity of the outcome function do not guarantee convexity of the corresponding attainable set . Nevertheless, the multiple criteria maximization model (1) can be rewritten in the equivalent form where the attainable set is convex whenever is convex and functions are concave.

Model (1) only specifies that we are interested in maximization of all objective functions for . In order to make it operational, one needs to assume some solution concept specifying what it means to maximize multiple objective functions. The solution concepts may be defined by properties of the corresponding preference model [21]. The commonly used concept of the Pareto-optimal solutions, as feasible solutions for which one cannot improve any criterion without worsening another, depends on the rational dominance which may be expressed in terms of the vector inequality.

Simple solution concepts for multiple criteria problems are defined by aggregation (or utility) functions to be maximized. Thus, the multiple criteria problem (1) is replaced with the maximization problem. Consider In order to guarantee the consistency of the aggregated problem (3) with the maximization of all individual objective functions in the original multiple criteria problem (or Pareto-optimality of the solution), the aggregation function must be strictly increasing with respect to every coordinate.

The simplest aggregation functions commonly used for the multiple criteria problem (1) are defined as the total outcome , equivalently as the mean (average) outcome or, alternatively, as the worst outcome . The mean (total) outcome maximization is primarily concerned with the overall system efficiency. Being based on averaging, it often provides a solution where some services are discriminated in terms of performance. On the other hand, the worst outcome maximization, that is, the so-called max-min solution concept, is regarded as maintaining equity. Indeed, in the case of a simplified resource allocation problem with knapsack constraints, the max-min solution takes the form for all , thus meeting the perfect equity requirement . In the general case, with a possibly more complex feasible set structure, this property is not fulfilled [22, 23]. Nevertheless, if there exists a Pareto-optimal vector satisfying the perfect equity requirement , then is the unique optimal solution of the max-min problem (4) [24].
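The contrast between the two simplest aggregations can be illustrated with a small numerical sketch. The outcome vectors below are hypothetical and serve only to show that the mean-outcome and the worst-outcome (max-min) aggregations may select different allocation patterns.

```python
# A minimal sketch (hypothetical data): comparing the mean-outcome and the
# worst-outcome (max-min) aggregations over a small set of feasible
# allocation patterns, each described by its outcome vector.
candidates = {
    "A": [6.0, 6.0, 0.5],   # high average, one service discriminated
    "B": [3.0, 3.0, 3.0],   # perfectly equal outcomes
    "C": [5.0, 4.0, 2.0],   # intermediate pattern
}

def mean_outcome(y):
    return sum(y) / len(y)

def worst_outcome(y):
    return min(y)

best_mean = max(candidates, key=lambda k: mean_outcome(candidates[k]))
best_maxmin = max(candidates, key=lambda k: worst_outcome(candidates[k]))
print("mean-outcome choice :", best_mean)    # -> A (efficiency oriented)
print("max-min choice      :", best_maxmin)  # -> B (equity oriented)
```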

Actually, the distribution of outcomes may make the max-min criterion partially passive when one specific outcome is relatively very small for all the solutions. For instance, while allocating clients to service facilities, such a situation may be caused by the existence of an isolated client located at a considerable distance from all the facilities. Maximization of the worst service performance is then reduced to maximization of the service performance for that single isolated client, leaving other allocation decisions unoptimized. For instance, having four outcome vectors , , , and available, they are all optimal in the corresponding max-min optimization, as the third outcome cannot be better than 1. Maximization of the first and the second outcome is then not supported by the max-min solution concept, allowing one to select a dominated vector as the optimal solution. This is a clear case of an inefficient solution where one may still improve other outcomes while maintaining fairness by leaving the worst outcome at its best possible value. The max-min solution may then be regularized according to the Rawlsian principle of justice. Rawls [25, 26] considers the problem of ranking different “social states,” which are different ways in which a society might be organized, taking into account the welfare of each individual in each society, measured on a single numerical scale. Applying the Rawlsian approach, any two states should be ranked according to the accessibility levels of the least well-off individuals in those states; if the comparison yields a tie, the accessibility levels of the next-least well-off individuals should be considered, and so on. Formalization of this concept leads us to the lexicographic maximin optimization model, or the so-called max-min fairness, where the largest feasible performance function value for activities with the smallest (i.e., worst) performance function value (this is the maximin solution) is followed by the largest feasible performance function value for activities with the second smallest (i.e., second worst) performance function value, without decreasing the smallest value, and so forth. The lexicographic maximin solution is known in game theory as the nucleolus of a matrix game. It originates from an idea, presented by Dresher [27], to select from the optimal (max-min) strategy set of a player a subset of optimal strategies which exploit mistakes of the opponent optimally. It was later refined to the formal nucleolus definition [28] and generalized to an arbitrary number of objective functions [29]. The concept was considered early in Tschebyscheff approximation [30] as a refinement taking into account the second largest deviation, the third one, and so on, to be hierarchically minimized. Actually, the so-called strict approximation problem on compact ordered sets is resolved by introducing sequential optimization of the norms on subspaces. Luss and Smith [31] published the first paper on the lexicographic maximin approach for resource allocation problems with continuous variables and multiple resource constraints. Within communication network applications the lexicographic maximin approach appeared already in [11, 12] and now, under the name max-min fairness (MMF), is treated as one of the standard fairness concepts [7]. The lexicographic maximin has been used for general linear programming multiple criteria problems [32–34], as well as for specialized problems related to multiperiod resource allocation with and without substitutions [5, Ch. 5] and [35–39].
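A minimal sketch of this regularization: vectors are compared by their outcomes sorted in nondecreasing order, worst first, so several vectors that tie on the worst outcome are still discriminated by the second worst, the third worst, and so on. The outcome vectors below are hypothetical.

```python
# A minimal sketch of the lexicographic maximin (max-min fair) preference:
# vectors are compared by their sorted outcomes, worst first.  Several
# candidates tie on the worst outcome, so plain max-min would accept any of
# them, while lexicographic maximin still ranks them.
def leximin_key(y):
    return tuple(sorted(y))          # worst outcome first, then second worst, ...

candidates = [(1, 1, 1), (4, 1, 1), (4, 4, 1), (6, 3, 1)]

for y in candidates:
    print(y, "->", leximin_key(y))

best = max(candidates, key=leximin_key)
print("lexicographic maximin choice:", best)   # (4, 4, 1)
```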

In discrete optimization it has been considered for various problems [40, 41], including the location-allocation ones [42]. Luss [43] presented an expository paper on equitable resource allocation using a lexicographic minimax (or lexicographic maximin) approach, while [44] provides a wide discussion of various models and solution algorithms in connection with communication networks. The recent book by Luss [5] brings together much of the equitable resource allocation research from the past thirty years and provides the current state of the art in models and algorithms within a wide gamut of applications.

Actually, the original introduction of the MMF in networking characterized the MMF optimal solution by the lack of a possibility to increase any outcome without decreasing some smaller outcome [12]. In the case of a convex attainable set (as considered in [12]) such a characterization also represents the lexicographic maximin solution. In the nonconvex case, as pointed out in [45], such a strictly defined MMF solution may not exist, while the lexicographic maximin always exists and coincides with the former whenever it exists (see [46] for a wider discussion). Therefore, the MMF is commonly identified with the lexicographic maximin, while the classical MMF definition is considered rather as an algorithmic approach which is applicable only to convex models. We follow this convention in the remainder of the paper. Indeed, while for convex problems it is relatively easy to form sequential algorithms to execute the lexicographic maximin by recursive max-min optimization with fixed smallest outcomes (see [5, 31–33, 43, 44, 46, 47]), for nonconvex problems the sequential algorithms must be built with the use of some artificial criteria (see [24, 40, 42, 44, 48] and [5, Ch. 7]). Some more discussion is provided in Section 2.4.

2.2. From Equity to Fair Optimization

The concept of fairness has been studied in various areas beginning from political economy problems of fair allocation of consumption bundles [25, 49–52] to abstract mathematical formulations [53, 54]. Fairness is, essentially, an abstract sociopolitical concept of distributive justice that implies impartiality and equity in the distribution of goods. In order to ensure fairness in a system, all system entities have to be equally well provided with the system’s services. Therefore, in systems analysis and operational research fairness was usually quantified with the so-called inequality measures to be minimized [55–60] or with fairness indices [61, 62]. Typical inequality measures are deviation-type dispersion characteristics. They are inequality relevant, which means that they are equal to 0 in the case of perfectly equal outcomes while taking positive values for unequal ones. The simplest inequality measures are based on the absolute measurement of the spread of outcomes or deviations from the mean, like the mean absolute difference, maximum absolute difference, standard deviation (variance), mean absolute deviation, and so forth. Relative inequality measures, normalized by the mean outcome, are also frequently used; an example is the Gini coefficient, which is the relative mean difference.
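A minimal sketch of these dispersion characteristics on a hypothetical outcome vector follows; note that conventions for the Gini coefficient differ by a factor of two (here it is taken as half of the relative mean difference).

```python
# A minimal sketch (hypothetical outcome vector) of typical inequality
# measures: maximum absolute difference, mean absolute difference, standard
# deviation, mean absolute deviation, and the Gini coefficient.
from itertools import combinations
from statistics import mean, pstdev

y = [2.0, 4.0, 4.0, 10.0]                     # hypothetical outcomes
mu = mean(y)
n = len(y)

max_abs_diff  = max(abs(a - b) for a, b in combinations(y, 2))
# Mean absolute difference over all ordered pairs, including i == j.
mean_abs_diff = sum(abs(a - b) for a in y for b in y) / n**2
std_dev       = pstdev(y)
mean_abs_dev  = mean(abs(v - mu) for v in y)
# Gini coefficient as half of the relative mean difference (conventions vary).
gini          = mean_abs_diff / (2 * mu)

print(max_abs_diff, round(mean_abs_diff, 3), round(std_dev, 3),
      round(mean_abs_dev, 3), round(gini, 3))
```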

Complex systems usually require mathematical programming models in order to describe the dependencies and to enable system optimization. Many such models are based on some network of connections and dependencies. A wide range of system models is related to some flows within a network expressing realizations of competing activities [18]. This applies to communication systems, power distribution systems, transportation systems, logistics systems, and so forth. Among others, the discrete location problems can be viewed in terms of such network systems [19, 20]. Typically, fairness is considered in relation to the division of a given amount (the cake-division problem), imposing a consistency requirement: the reference points must sum to the total amount available to the agents. A methodology capable of modeling and solving fair allocation problems in the context of system optimization must take into account a possible increase of the total amount. Unfortunately, direct minimization of typical inequality measures contradicts the maximization of individual outcomes and may lead to inferior decisions. The max-min fairness represented by lexicographic maximin optimization meets such needs. This specific concept may be generalized to concepts of fairness expressed by the equitable optimization [9, 24, 43, 63–65], representing inequality-averse optimization rather than inequality minimization. Since the term equitable optimization or equitable resource allocation is frequently used as limited to the lexicographic maximin optimization (see [5]), we use the term fair optimization to express the wider class of equitable approaches.

The concept of fair optimization is a specific refinement of Pareto-optimality taking into account the inequality minimization according to the Pigou-Dalton approach. First of all, fairness requires impartiality of evaluation, thus focusing on the distribution of outcome values while ignoring their ordering. That means that, in the multiple criteria problem (1), we are interested in a set of outcome values without taking into account which outcome is taking a specific value. Hence, we assume that the preference model is impartial (anonymous, symmetric). In terms of the preference relation it may be written as the following axiom: which means that any permuted outcome vector is indifferent in terms of the preference relation. Further, fairness requires equitability of outcomes, which requires that the preference model satisfy the (Pigou-Dalton) principle of transfers. The principle of transfers states that a transfer of any small amount from an outcome to any other relatively worse-off outcome results in a more preferred outcome vector. As a property of the preference relation, the principle of transfers takes the form of the following axiom: The rational preference relations satisfying additionally axioms (6) and (7) are called hereafter fair (equitable) rational preference relations. We say that outcome vector fairly (equitably) dominates , if and only if is preferred to for all fair rational preference relations. In other words, fairly dominates , if there exists a finite sequence of vectors such that , and is constructed from by application of either permutation of coordinates, equitable transfer, or increase of a coordinate. An allocation pattern is called fairly (equitably) efficient or simply fair if is fairly nondominated. Note that each fairly efficient solution is also Pareto-optimal, but not vice versa.

In order to guarantee fairness of the solution concept (3), additional requirements on aggregation (utility) functions need to be introduced. The aggregation function must be symmetric, that is, for any permutation of , as well as being equitable (to satisfy the principle of transfers) for any . Such functions were referred to as (strictly) Schur-concave [66]. In the case of a strictly increasing and strictly Schur-concave function, every optimal solution to the aggregated optimization problem (3) defines some fairly efficient solution of allocation problem (1) [64].

Both the simplest aggregation functions, the mean and the minimum, are symmetric, although they do not satisfy strictly the equitability requirement. For any strictly concave and strictly increasing utility function , the aggregation function is strictly monotonic and equitable, thus defining a family of fair aggregations [64]. Consider

Various concave utility functions can be used to define the fair aggregations (8) and the resulting fair solution concepts. In the case of outcomes restricted to positive values, one may use the logarithmic function, thus resulting in the proportional fairness (PF) solution concept [67, 68]. Actually, it corresponds to the so-called Nash criterion [69], which maximizes the product of additional utilities compared to the status quo. Again, in the case of a simplified resource allocation problem with knapsack constraints, the PF solution takes the form for all , thus allocating the resource inversely proportionally to the consumption of particular activities.
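A minimal sketch of this observation for a hypothetical knapsack-type instance: maximizing the sum of logarithms of the activity levels subject to a single budget constraint yields an allocation inversely proportional to each activity's per-unit consumption. The coefficients and the budget below are assumptions made for illustration only.

```python
# A minimal sketch of the proportional-fairness (Nash) solution for a
# simplified knapsack-type allocation: maximize sum_i log(x_i) subject to
# sum_i a_i * x_i <= B.  The coefficients a_i and budget B are hypothetical.
import math

a = [1.0, 2.0, 4.0]          # resource consumption per unit of each activity
B = 12.0                     # total resource budget
n = len(a)

# Closed form implied by the first-order conditions: x_i = B / (n * a_i),
# i.e., the allocated amount is inversely proportional to the consumption a_i.
x_pf = [B / (n * a_i) for a_i in a]
print("PF allocation:", x_pf,
      "budget used:", sum(ai * xi for ai, xi in zip(a, x_pf)))

def log_utility(x):
    return sum(math.log(xi) for xi in x)

# Sanity check against a few other feasible allocations (same budget).
for x in ([6.0, 2.0, 0.5], [2.0, 2.0, 1.5], x_pf):
    assert sum(ai * xi for ai, xi in zip(a, x)) <= B + 1e-9
    print(x, "->", round(log_utility(x), 4))   # x_pf attains the largest value
```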

For positive outcomes a parametric class of utility functions may be used to generate various fair solution concepts for [70]. The corresponding solution concept (8), called -fairness, represents the PF approach for , while with tending to infinity it converges to the MMF. For large enough one generally gets an approximation to the MMF, while for discrete problems a large enough guarantees the exact MMF solution. Such a way of identifying the MMF solution was considered in location problems [40, 42] as well as in content distribution networking problems [71, 72]. However, every such approach requires building (or guessing) a utility function prior to the analysis, and it then gives only one possible compromise solution. For the common case of upper-bounded outcomes, one may maximize power functions for , which is equivalent to minimizing the corresponding -norm distances from the common upper bound [64].
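A minimal sketch of the standard alpha-fair utility family for positive outcomes, u_alpha(y) = y^(1-alpha)/(1-alpha) for alpha != 1 and log(y) for alpha = 1, which is consistent with the parametric class described above: as alpha grows, preference shifts from the efficient but uneven allocation to the even one. The outcome vectors and alpha values are hypothetical.

```python
# A minimal sketch of the alpha-fair utility family: alpha = 1 gives
# proportional fairness; alpha -> infinity approaches max-min fairness.
import math

def alpha_utility(y, alpha):
    if alpha == 1.0:
        return math.log(y)
    return y ** (1.0 - alpha) / (1.0 - alpha)

def aggregate(vec, alpha):
    return sum(alpha_utility(y, alpha) for y in vec)

y_efficient = [2.0, 2.0, 0.25]   # higher total, very uneven
y_equal     = [1.2, 1.2, 1.2]    # lower total, perfectly even

for alpha in (0.5, 1.0, 2.0, 8.0):
    pick = "equal" if aggregate(y_equal, alpha) > aggregate(y_efficient, alpha) else "efficient"
    print(f"alpha={alpha:>4}: preferred allocation -> {pick}")
```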

Figure 1 shows the structure of fair dominance for a two-dimensional outcome space. For any outcome vector , the fair dominance relation distinguishes the set of dominated outcomes (obviously worse for all fair rational preferences) and the set of dominating outcomes (obviously better for all fair rational preferences). Some outcome vectors remain neither dominated nor dominating (in the white areas), and they can be classified differently by various specific fair solution concepts. The lexicographic maximin assigns the entire interior of the inner white triangle to the set of preferred outcomes while classifying the interior of the external open triangles as worse outcomes. Isolines of various utility functions split the white areas in different ways. For instance, there is no fair dominance between vectors and ; the MMF considers the latter as better, while proportional fairness prefers the former. On the other hand, vector fairly dominates , and all fairness models (including MMF and PF) prefer the former. One may notice that the set of directions leading to outcome vectors dominated by a given vector is, in general, not a cone and is not convex. However, when we consider the set of directions leading to outcome vectors dominating a given vector, we get a convex set.

Certainly, any fair solution concept usually leads to some deterioration of the system efficiency when compared to sole efficiency optimization. This is referred to as the price of fairness, and it was quantified as the relative difference with respect to a fully efficient solution that maximizes the sum of all performance functions (total outcome) [73]; that is, the price of fairness concept on the attainable set is defined as where is the outcome vector maximizing the total outcome on , while denotes the outcome vector maximizing the fair optimization concept on . Formula (11) is applicable only to problems with a positive total outcome; this, however, is a common case for attainable sets of models based on some network of connections and dependencies. Bertsimas et al. [73] examined the price of fairness for a broad family of problems, focusing on PF and MMF models. They showed that for any compact and convex attainable set with equal maximum achievable outcomes, which are greater than 0, the price of proportional fairness is bounded by and the price of max-min fairness is bounded by Moreover, the bound under PF is tight if is an integer, and the bound under MMF is tight for all . Similar analysis for the -fairness [74] shows that the price of -fairness is bounded by The price of fairness strongly depends on the attainable set structure. One can easily construct problems where any fair solution is also maximal with respect to the total outcome (no price of fairness occurs). In [75], the -fairness concept for network flow problems was analyzed, and a class of networks was generated with the property that a fairer allocation is always more efficient. In particular, it implies that max-min fairness may achieve higher total throughput than proportional fairness.

2.3. Multicriteria Models

The relation of fair dominance can be expressed as a vector inequality on the cumulative ordered outcomes [63]. The latter can be formalized as follows. First, we introduce the ordering map such that , where and there exists a permutation of set such that for . Next, we apply cumulation to the ordered outcome vectors to get the following quantities: expressing, respectively, the worst outcome, the total of the two worst outcomes, the total of the three worst outcomes, and so on. Pointwise comparison of the cumulative ordered outcomes for vectors with equal means was extensively analyzed within the theory of equity [76] and the mathematical theory of majorization [66], where it is called the relation of Lorenz dominance or weak majorization, respectively. It includes the classical results allowing one to express an improvement in terms of the Lorenz dominance as a finite sequence of Pigou-Dalton equitable transfers. It can be generalized to vectors with various means, which allows one to justify the following statement [63, 77]. Outcome vector fairly dominates , if and only if for all , where at least one strict inequality holds.
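A minimal sketch of this dominance test: sort each outcome vector in nondecreasing order, take cumulative sums (the generalized Lorenz vector), and compare componentwise. The outcome vectors below are hypothetical.

```python
# A minimal sketch of the fair (equitable) dominance test via cumulative
# ordered outcomes: y1 fairly dominates y2 iff every partial sum of the
# sorted-ascending outcomes of y1 is >= the corresponding partial sum for
# y2, with at least one strict inequality.
from itertools import accumulate

def cumulative_ordered(y):
    return list(accumulate(sorted(y)))     # worst, worst+2nd worst, ...

def fairly_dominates(y1, y2):
    c1, c2 = cumulative_ordered(y1), cumulative_ordered(y2)
    return all(a >= b for a, b in zip(c1, c2)) and any(a > b for a, b in zip(c1, c2))

y_a = [4.0, 2.0, 3.0]
y_b = [5.0, 1.0, 3.0]          # same total, more spread
y_c = [2.0, 2.0, 2.0]

print(fairly_dominates(y_a, y_b))   # True: y_a is obtainable by an equitable transfer
print(fairly_dominates(y_a, y_c))   # True: y_a also has larger cumulative totals
print(fairly_dominates(y_b, y_a))   # False
```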

Fair solutions to problem (1) can be expressed as Pareto-optimal solutions for the multiple criteria problem with objectives . Consider Hence, the multiple criteria problem (16) may serve as a source of fair solution concepts. Note that the aggregation maximizing the mean outcome corresponds to maximization of the last objective in problem (16). Similarly, the max-min corresponds to maximization of the first objective . When limited to a single criterion, they do not guarantee the fairness of the optimal solution. On the other hand, when applying lexicographic optimization to problem (16), one gets the lexicographic maximin solution concept, that is, the classical equitable optimization model [5] representing the MMF.

For modeling various fair preferences one may use some combinations of the criteria in problem (16). In particular, for the weighted sum aggregation one gets , which can be expressed with weights allocated to coordinates of the ordered outcome vector, that is, as the so-called ordered weighted average (OWA) [78, 79]: If the weights are strictly decreasing and positive, that is, , then each optimal solution of the OWA problem (18) is a fairly efficient solution of (1). Such OWA aggregations are sometimes called ordered ordered weighted averages (OOWA) [80]. When looking at the structure of fair dominance (Figure 1), the piecewise linear isolines of the OOWA split the white areas of outcome vectors remaining neither dominated nor dominating (cf. Figure 2).
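A minimal sketch of the OWA aggregation with strictly decreasing positive weights attached to the outcomes sorted in nondecreasing order (largest weight on the worst outcome); the weights and outcome vectors are hypothetical.

```python
# A minimal sketch of the ordered weighted average (OWA) with strictly
# decreasing positive weights applied to the sorted-ascending outcomes.
def owa(y, w):
    assert len(y) == len(w)
    return sum(wi * yi for wi, yi in zip(w, sorted(y)))   # worst outcome gets w[0]

w_fair = [0.6, 0.3, 0.1]          # strictly decreasing -> fairly efficient optima
y_even = [3.0, 3.0, 3.0]
y_skew = [6.0, 3.0, 1.0]          # higher total, uneven

print(owa(y_even, w_fair), owa(y_skew, w_fair))   # 3.0 vs 2.1: even vector preferred
# With (almost) equal weights the OWA approaches the mean outcome, and the
# skewed vector with the larger total is preferred instead.
w_flat = [0.34, 0.33, 0.33]
print(owa(y_even, w_flat), owa(y_skew, w_flat))
```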

When the differences between weights tend to infinity, the OWA model becomes the lexicographic maximin [81]. On the other hand, with the differences between subsequent monotonic weights approaching 0, the OWA model tends to the mean outcome maximization while still preserving fair optimization properties (cf. Figure 3).

To the best of our knowledge, the price of fairness related to the fair OWA models has not been studied to date. The OWA aggregation may model various preferences from the max to the min. Yager [78] introduced an appealing concept of the andness measure to characterize OWA operators. The degree of andness associated with the OWA operator is defined as For the min aggregation, representing the OWA operator with weights , one gets , while for the max aggregation, representing the OWA operator with weights , one has . For the total (mean) outcome one gets . OWA aggregations with andness greater than 1/2 are considered fair, and fairer when the andness gets closer to 1. A given andness level does not define a unique set of weights . Various monotonic sets of weights with a given andness measure may be generated (cf. [82, 83] and references therein).
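A minimal sketch of the andness measure, under the convention used in the previous sketch that the first weight is attached to the worst (smallest) outcome; the last example reuses the hypothetical fair weights from above.

```python
# A minimal sketch of the andness measure for OWA weights, assuming w[0] is
# attached to the worst (smallest) outcome: min has andness 1, max has
# andness 0, and the mean has andness 1/2.
def andness(w):
    n = len(w)
    return sum(wi * (n - 1 - i) / (n - 1) for i, wi in enumerate(w))

print(andness([1.0, 0.0, 0.0]))     # min  -> 1.0
print(andness([0.0, 0.0, 1.0]))     # max  -> 0.0
print(andness([1/3, 1/3, 1/3]))     # mean -> 0.5
print(andness([0.6, 0.3, 0.1]))     # fair OWA weights from the previous sketch -> 0.75
```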

The definition of the quantities is complicated, as it requires ordering. Nevertheless, the quantities themselves can be modeled with simple auxiliary variables and linear constraints, although maximization of the th smallest outcome alone is a hard (combinatorial) problem. The maximization of the sum of the smallest outcomes is a linear programming (LP) problem, as where is an unrestricted variable [84, 85]. This allows one to implement the OWA optimization quite effectively as an extension of the original constraints and criteria with simple linear inequalities [86] (without the binary variables used in the classical OWA optimization models [87]), as well as to define sequential methods for lexicographic maximin optimization of discrete and nonconvex models [48]. Various fairly efficient solutions of (1) may be generated as Pareto-optimal solutions to the multicriteria problem:
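A minimal numerical check of the LP characterization referred to above, for a fixed hypothetical outcome vector: the optimum of max over t and d >= 0 of k*t - sum_i d_i subject to d_i >= t - y_i equals the sum of the k smallest outcomes. The sketch assumes SciPy is available.

```python
# A minimal sketch verifying the LP expression for the sum of the k smallest
# outcomes of a fixed vector y (hypothetical data; requires scipy).
import numpy as np
from scipy.optimize import linprog

y = np.array([7.0, 2.0, 5.0, 3.0])
m, k = len(y), 2

# Variables: [t, d_1, ..., d_m]; minimize -k*t + sum(d), i.e. maximize k*t - sum(d).
c = np.concatenate(([-k], np.ones(m)))
# Constraints t - d_i <= y_i for every i.
A_ub = np.hstack((np.ones((m, 1)), -np.eye(m)))
b_ub = y
bounds = [(None, None)] + [(0, None)] * m   # t unrestricted, d_i >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("LP value          :", -res.fun)                 # 5.0
print("sum of 2 smallest :", np.sort(y)[:k].sum())     # 5.0
```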

Recently, the duality relation between the generalized Lorenz function and the second order cumulative distribution function has been shown [88]. The latter can also be presented as mean shortfalls (mean below-target deviations) to outcome targets : It follows from the duality theory [88] that one may completely characterize the fair dominance by the pointwise comparison of the mean shortfalls for all possible targets. Outcome vector fairly dominates , if and only if for all where at least one strict inequality holds. In other words, the fair dominance is equivalent to the increasing concave order more commonly known as the Second Stochastic Dominance (SSD) relation [89].

For the -dimensional outcome vectors we consider, all the shortfall values are completely defined by the shortfalls for at most different targets representing values of several outcomes, while the remaining shortfall values follow from linear interpolation. Nevertheless, these target values depend on the specific outcome vectors, and one cannot define any universal grid of targets allowing one to compare all possible outcome vectors. In order to take advantage of the multiple criteria methodology one needs to focus on a finite set of target values. Let denote the set of all attainable outcomes. Fair solutions to problem (1) can be expressed as Pareto-optimal solutions for the multiple criteria problem with objectives . Consider Hence, the multiple criteria problem (22) may serve as a source of fair solution concepts. When applying lexicographic minimization to problem (22) one gets the lexicographic maximin solution concept, that is, the classical equitable optimization model [5] representing the MMF. However, for the lexicographic maximin solution concept one simply performs lexicographic minimization of functions counting outcomes not exceeding several targets [42, 48].

Certainly, in many practical resource allocation problems one cannot consider target values covering all attainable outcomes. By reducing the number of criteria, we restrict the opportunities to generate all possible fair allocations. Nevertheless, one may still generate reasonable compromise solutions [24]. In order to get a computational procedure one needs either to aggregate mean shortfalls for an infinite number of targets or to focus the analysis on an arbitrarily preselected finite grid of targets. The former turns out to lead us to the mean utility optimization models (8). Indeed, classical results of majorization theory [66] relate the mean utility comparison to the comparison of the weighted mean shortfalls. Actually, the maximization of a concave and increasing utility function is equivalent to minimization of the weighted aggregation with positive weights (due to concavity, the second derivative is negative).

2.4. Methodologies for Solving Lexicographic Maximin Problems

Consider the following resource allocation problem: where the performance functions are strictly increasing and continuous, and , for all and . The lexicographic maximization objective function, jointly with the ordering constraints, defines the lexicographic maximin objective function (this is equivalent to defining the objective function using the ordering mapping ). Consider Figure 4, which presents a network that serves point-to-point demands between nodes 1 and 2, nodes 3 and 4, and nodes 3 and 5. The numbers on the links are the link capacities, for example, 4 Gb/s on link (1, 3). Suppose the demand between a node-pair can be routed only on a single path, where this path is given as part of the input; for example, the path selected between nodes 1 and 2 uses links (1, 3) and (3, 2). The problem of finding the lexicographic maximin solution of demand throughputs between various node-pairs subject to link capacity constraints (which serve as the resource constraints) can be formulated by (23a)–(23d).

It turns out that for various performance functions, such as linear functions and exponential functions, the lexicographic maximin solution of (23a)–(23d) is obtained by simple algebraic manipulations of closed-form expressions and the computational effort is polynomial. This facilitates solving very large problems in negligible computing time. For other functions, where the solution cannot be derived using closed-form expressions, somewhat more computations are required, in particular, function evaluations complemented by a one-dimensional numerical search are employed (see [5, Ch. 3] and [31, 90, 91]). Algorithms for problem (23a)–(23d) serve as building blocks for more complex problems such as for problems with substitutable resources, for multiperiod problems, and for content distribution problems (see [5, Chs. 4–6]).
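A minimal sketch of the single-constraint building block behind such methods: with strictly increasing performance functions and one capacity constraint, the equalizing (minimax) performance level can be located by a one-dimensional bisection search on the common level, inverting each performance function to read off the resource it consumes. The performance functions, coefficients, and budget below are assumptions for illustration, and lower bounds on the allocations are ignored.

```python
# A minimal sketch: find the common performance level lambda such that
# sum_i f_i^{-1}(lambda) = B for a single resource constraint sum_i x_i <= B,
# using bisection.  Functions and data are hypothetical.
import math

a = [1.0, 2.0, 5.0]                     # per-activity parameters
B = 10.0                                # single resource budget

def f(i, x):                            # performance of activity i
    return math.log(1.0 + x / a[i])

def f_inv(i, lam):                      # resource needed to reach level lam
    return a[i] * (math.exp(lam) - 1.0)

def used(lam):
    return sum(f_inv(i, lam) for i in range(len(a)))

lo, hi = 0.0, 10.0
for _ in range(60):                     # bisection on the common level
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if used(mid) <= B else (lo, mid)

lam = lo
x = [f_inv(i, lam) for i in range(len(a))]
print("common performance level:", round(lam, 4))
print("allocation:", [round(v, 4) for v in x], "uses", round(sum(x), 4), "of", B)
```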

Now, consider the cases of performance functions that are nonseparable, where each of the functions in (23a) and (23b) is replaced by , thus, depending on multiple decision variables. Consider Figure 5 which shows three possible paths for the demand between nodes 1 and 2. The throughput between this node-pair is simply the sum of flows along these three paths.

Even for linear performance functions (e.g., throughputs in communication networks) the computational effort is significantly larger, as the algorithm for finding the lexicographic maximin solution requires repeatedly solving linear programming problems (see [5, Chs. 3.4 and 6.2], [7, Ch. 8], and [32, 33, 44, 92]).

Next, consider the case of a nonconvex feasible region, for example, with discrete decision variables. For example, consider a communication network (as in Figure 5) where the demand between any node-pair can flow along multiple paths, but only one of these paths may be selected (here the selected path for each demand is a decision variable). The resulting formulation includes 0-1 decision variables [7]. Again, the objective is to find the lexicographic maximin solution of the throughputs where each demand uses only one path. None of the solution methods above apply. If the number of possible distinct outcomes is small, one can construct counting functions, where the th counting function value is the number of times the th distinct worst outcome appears in the solution. That means that one introduces functions with expressing the number of values in the outcome vector . The lexicographic maximin optimization problem is then replaced by lexicographic minimization of the counting functions, which is solved by repeatedly solving minimization problems with discrete variables: where is a sufficiently large constant (see [5, Ch. 7.2] and [44, 48, 93]). Moreover, in general, binary variables may be eliminated if large numbers of auxiliary continuous variables and constraints are added, leading to the formulation based on (22) (see [5, Ch. 7.2] and [44, 48, 93, 94]).
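A minimal sketch of the counting-function idea for discrete problems with a small set of distinct outcome values: candidate solutions are ranked by lexicographically minimizing how many times the worst value occurs, then the second worst, and so on. The candidate solutions here are a hypothetical enumeration; in practice each counting level is minimized by solving a mixed-integer program as in the formulation referenced above.

```python
# A minimal sketch: comparing discrete outcome vectors by lexicographic
# minimization of counting functions (occurrences of the worst value, then
# the next worst, ...).  Data are hypothetical.
from collections import Counter

POSSIBLE_VALUES = [1, 2, 3]                 # small set of distinct outcomes

def counting_key(y):
    counts = Counter(y)
    # Occurrences of the worst value, then of the second worst, and so on.
    return tuple(counts.get(v, 0) for v in sorted(POSSIBLE_VALUES))

candidates = [(1, 1, 3, 3), (1, 2, 2, 3), (1, 2, 3, 3), (2, 2, 2, 2)]
best = min(candidates, key=counting_key)

for y in candidates:
    print(y, "->", counting_key(y))
print("lexicographic maximin (via counting) choice:", best)   # (2, 2, 2, 2)
```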

When the number of distinct outcomes is large, we can solve the lexicographic maximin problem by solving lexicographic maximization problems in the format of problems (20a)–(20d) (see [5, Ch. 7.3] and [44, 48, 64, 94–96]). Again, the solution method adds many auxiliary variables and constraints to the formulation.

3. Fairness in Communication Networks

3.1. Fairness and Traffic Efficiency

Fairness issues in communication networks become most profound when dealing with traffic handling. Roughly speaking, whenever the capacity of network resources such as links and nodes is not sufficient to carry the entire offered traffic, a part of the traffic must be rejected. Then a natural question arises: how should the total carried traffic be shared among the network users in a fair way, while at the same time assuring acceptable overall traffic-carrying efficiency? This kind of problem arises, for example, in the Internet for elastic traffic sources which, from a mathematical point of view, can be treated as generating infinite traffic. Thus, the total traffic that can eventually be carried by the network should be fairly split into the traffic flows assigned to individual demands. This issue is illustrated by the following example [7].

Example 1. Consider a simple network composed of two links in series depicted in Figure 6. There are three nodes (), two links (), and three demand pairs (, , ). The demands generate elastic traffic, that is, each of them can consume any bandwidth assigned to its path. Suppose that the capacity of the links is the same and equal to (). Let be the path-flows (bandwidth) assigned to demands , respectively. Clearly, such a flow assignment is feasible if and only if and . For the three basic traffic objectives the solutions are as follows: (i) max-min fairness , (ii) proportional fairness , , and (iii) throughput maximization , .
Above, denotes the throughput, that is, . Clearly, the MMF solution is perfectly fair from the demand viewpoint but at the same time the worst in terms of throughput. This is because the “long” demand , consuming bandwidth on both links, gets the same flow as the “short” demands , , each consuming bandwidth on its direct link. The PF solution increases the flow of the short demands at the expense of the long demand. This is acceptably fair for the demands and increases the throughput. Finally, the throughput maximization solution is unfair (the long demand gets nothing) but, by definition, maximizes the throughput.
Note that in this example the price of max-min fairness calculated according to formula (11) is 1/4 which is equal to the upper bound (13). Similarly, the price of proportional fairness 1/6 is close to its upper bound (12). However, the price of fairness strongly depends on the network topology. In [75], the authors demonstrate a class of networks such that an -fair allocation with higher is always more efficient in terms of total throughput. In particular, this implies that max-min fairness may achieve higher throughput than proportional fairness.
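A minimal sketch re-deriving the figures quoted above under the assumption of unit link capacities (an assumption consistent with the prices of fairness 1/4 and 1/6 mentioned in the text); the third demand is the long one using both links.

```python
# A minimal sketch checking the three allocations of Example 1 for the
# two-link chain with assumed unit capacities, together with their
# throughputs and prices of fairness according to formula (11).
def feasible(x1, x2, x3, c=1.0, eps=1e-9):
    return x1 + x3 <= c + eps and x2 + x3 <= c + eps   # the two serial links

allocations = {
    "max-min fairness":        (0.5, 0.5, 0.5),
    "proportional fairness":   (2/3, 2/3, 1/3),
    "throughput maximization": (1.0, 1.0, 0.0),
}

T_max = sum(allocations["throughput maximization"])
for name, x in allocations.items():
    assert feasible(*x)
    T = sum(x)
    print(f"{name:<24} throughput = {T:.3f}  price of fairness = {(T_max - T) / T_max:.3f}")
# Expected: 1.500 / 0.250, 1.667 / 0.167, 2.000 / 0.000.
```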

In the networking literature related to fairness, the above MMF and PF objectives are the most popular. The throughput maximization objective is rarely used, as it is totally unfair. Instead, a reasonable modification consisting in lexicographical maximization of the two ordered criteria is used, where denotes the minimal element of the demand vector .

Considering MMF, besides optimization objectives directly related to traffic handling, objectives related to link loads are commonly considered in communication network optimization. In this case, the traffic volumes of demands to be realized are fixed. We shall come back to this issue later on.

3.2. Generic Optimization Models

The considered network is modeled with a graph , undirected or directed, composed of the set of nodes and the set of links . Thus, each link represents an unordered pair (undirected graphs) or an ordered pair (directed graphs) of nodes and is assigned the nonnegative unit capacity cost , which is a parameter, and the maximum capacity , which is a given constant (possibly equal to ). When link capacities are subject to optimization, they become optimization variables denoted by . The cost of the network is given by the quantity . The traffic demands are represented by the set . Each demand is characterized by a directed pair , composed of the originating node and the terminating node , and a minimum value (a parameter, possibly equal to ) of the traffic volume that has to be carried from to . Demand volumes and link capacities are expressed in the same units.

Each demand has a specified set of admissible paths (called the path-list) composed of selected elementary paths from to in graph . (Recall that an elementary path does not traverse any node more than once.) Paths in , used to realize the demand (traffic) volumes, are assigned flows , which are optimization variables. Each value specifies the reference capacity (expressed in the same units as link capacity and demand volume) reserved on path . The set of all admissible paths is denoted by . The maximum path-lists, that is, path-lists containing all elementary paths from to , will be denoted by , , with . The set of all paths traversing a given link will be denoted by . Note that in an undirected graph the links can be traversed by paths in both directions, while in a directed graph only in the direction of the link.

Let denote the total flow assigned to demand , that is, the traffic of demand carried in the network, and let . Besides, let be the link load induced by the path-flows. Then, the generic feasibility set (optimization space) of a traffic allocation problem (TAP) can be specified as follows: The set specifies the domain of a path-flow variable and is problem-dependent. Two typical cases are and . Note that in the undirected graph the path-flows through a link sum up to the link load no matter in which direction they traverse the link.

The three cases of TAP considered in Example 1 above can now be formulated as follows: (i) TAP/MMF: subject to (25a)–(25e), (ii) TAP/PF: subject to (25a)–(25e), and (iii) TAP/TM: subject to (25a)–(25e). Observe that the third case above is actually different from the third case considered in Example 1, as now throughput maximization is the secondary objective in lexicographical maximization.

When , all three problems are convex and as such can be approached effectively by means of the algorithms described in [7, 44, 46]. In fact, TAP/TM is a two-level linear program that can possibly be combined into a single LP [23], and TAP/MMF can be solved as a series of linear programs [32, 33, 44, 97]. Optimization approaches to TAP/PF are presented in [67].

Certainly, the feasible set (25a)–(25e) can be further constrained to consider more restricted routing strategies. The most common restriction is imposed by the single-path requirement that each demand is carried entirely on one selected path. Then the feasibility set must be augmented by the following constraints: In (26a)–(26c), are additional binary routing variables, and is a “big-M” constant. In this setting the above defined TAP problems become essentially mixed-integer programming problems (TAP/PF after a piecewise approximation of the logarithmic function), and in the case of MMF they must be treated by the general approach described in Section 2.3 as problem (20a)–(20d) (see also [44, 48, 64, 94–96] and [5, Ch. 7.3]).

We note that when the routing paths are fixed, that is, when , , then TAP/MMF becomes the classical fair allocation (equitable resource allocation) problem considered in Section 2.4 (see [12, Sec. 6.5.2] and [5, Ch. 6.1]). This version of the problem can be efficiently solved in polynomial time by the so-called water-filling algorithm based on the bottleneck link characterization of the problem (see [45] and Section 3.7). In fact, the bottleneck characterization of this TAP/MMF problem can be directly formulated as an integer programming problem (with binary variables), as demonstrated in [92]. The modular flow version of the problem is considered in [98].
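A minimal sketch of the water-filling (progressive filling) idea for the fixed single-path case: all unfrozen demand flows grow at a common rate until some link saturates, and demands crossing a saturated (bottleneck) link are frozen at their current rate. The topology, paths, and capacities below are hypothetical (a two-link chain with one demand per link and one long demand using both links).

```python
# A minimal sketch of progressive filling for the fixed-path TAP/MMF case.
def water_filling(capacity, paths):
    """capacity: dict link -> capacity; paths: dict demand -> list of links."""
    rate = {d: 0.0 for d in paths}
    residual = dict(capacity)
    active = set(paths)
    while active:
        # On each link the remaining capacity is shared by its active demands.
        increments = [
            residual[e] / sum(1 for d in active if e in paths[d])
            for e in residual
            if any(e in paths[d] for d in active)
        ]
        step = min(increments)
        for d in active:
            rate[d] += step
        for e in residual:
            residual[e] -= step * sum(1 for d in active if e in paths[d])
        saturated = {e for e, r in residual.items() if r <= 1e-9}
        # Freeze every demand that crosses a saturated (bottleneck) link.
        active = {d for d in active if not (set(paths[d]) & saturated)}
    return rate

capacity = {"e1": 1.0, "e2": 2.0}
paths = {"d1": ["e1"], "d2": ["e2"], "d3": ["e1", "e2"]}
print(water_filling(capacity, paths))   # {'d1': 0.5, 'd2': 1.5, 'd3': 0.5}
```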

An interesting version of the single-path TAP/MMF problem, using the bottleneck formulation of [92], is considered in [99]. In that problem, the routes are optimized so as to achieve the maximum traffic throughput while maintaining the MMF demand traffic assignment.

The problems specified above use the noncompact link-path formulation where the optimization variables are related to the routing paths. Hence, when we wish to consider all possible elementary paths, the number of variables becomes exponential in the size of the network. In this case, a path generation algorithm should be applied (this is easy in the case of linear programs), or the problems should be reformulated in node-link notation, using link-flow variables instead of the path-flow variables used in (25a)–(25e).

3.3. Selected Specific Models

In this section we discuss several specific network optimization models related to various aspects of fairness. An interesting case arises when the traffic demands are considered as given and the design objective is to balance the load of the links, aiming at minimizing the average packet delay in the network. The commonly known formulation of such load balancing is as follows: Using the MMF notion it is easy to define a load balancing problem that is stronger than problem (27a)–(27d), which in fact finds only the maximum element of the MMF vector expressing the relative link loads: Some variants of the problem given by (28a)–(28d) were studied in [100, 101].

Another version of the MMF load balancing problem (28a)–(28d) maximizes the unused link capacity in a fair way, relevant to circuit switching:

Above we have considered flow allocation problems assuming given link capacities. When the link capacities are subject to optimization, that is, when we simultaneously optimize path-flows and link capacities, then we deal with dimensioning problems. An example of such a problem (with a budget constraint) is as follows: where is a given budget for the total link cost. Note that we have skipped constraint (25b), which established a lower bound on the demand traffic allocation in formulation (25a)–(25e). If no additional constraints (such as (25b)) are enforced, then the optimal solution of (30a)–(30e) is trivial. For each demand , the optimal traffic is the same and is realized on the cheapest path with respect to the cost . Clearly When the PF objective is considered instead of the MMF objective (30a), the optimal solution is as follows (see [7, 68, 102]): so the total optimal flow allocated to a demand is inversely proportional to the cost of its shortest path (and is allocated to this path).
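A minimal sketch of the proportional-fairness observation above. Under the assumption that the budget constraint on total link cost is the only binding one, maximizing the sum of logarithms of the demand flows routed on their cheapest paths gives each demand the same share of the budget, so its flow is inversely proportional to its cheapest-path cost. The path costs, the budget B, and the closed form X_d = B / (D * kappa_d) are illustrative derivations under these assumptions, not quoted from the cited works.

```python
# A minimal sketch of PF dimensioning with a budget B on total link cost:
# each demand is routed on its cheapest path and receives a flow inversely
# proportional to that path's cost (hypothetical data).
cheapest_path_cost = {"d1": 1.0, "d2": 2.0, "d3": 5.0}   # kappa_d
B = 30.0                                                  # budget for total link cost
D = len(cheapest_path_cost)

# Stationarity of sum_d log(X_d) - lam * (sum_d kappa_d * X_d - B) gives
# X_d = B / (D * kappa_d): every demand consumes the same share B/D of the budget.
flows = {d: B / (D * k) for d, k in cheapest_path_cost.items()}
print(flows)                                              # {'d1': 10.0, 'd2': 5.0, 'd3': 2.0}
print("budget used:", sum(k * flows[d] for d, k in cheapest_path_cost.items()))   # 30.0
```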

More complicated optimization problems including link dimensioning were treated in [7, Ch. 13] (see also [103, 104]). For the MMF optimization problems related to wireless networks (in particular, to Wireless Mesh Networks) the reader can refer to [105].

3.4. Extended Fairness Objectives

While the MMF and PF objectives are the most popular in the networking literature related to fairness, there are also attempts to find various fair solutions taking advantages of the multicriteria fair optimization models presented in Section 2.3. In particular, the OWA aggregation (18) was applied to the network dimensioning problem for elastic traffic [95] as well as to the flow optimization in wireless mesh networks [106].

Example 2. Consider the simple network from Example 1, composed of two links in series as depicted in Figure 6. There are three demand pairs (, , ) generating elastic traffic, where are the path-flows (bandwidth) assigned to demands , respectively. Note that the ordered OWA maximization with decreasing weights results in bandwidth allocation , , , thus representing the maximum throughput. Ordered OWA maximization with decreasing weights results in bandwidth allocation , , which is the MMF solution.

It was demonstrated that allocations representing the classical fairness concepts (MMF and PF) were easy to achieve [95]. On the other hand, in order to find a larger variety of new compromise solutions it was necessary to incorporate some scaling techniques originating from the reference point methodology. Actually, it is a common flaw of the weighting approaches that they provide poor controllability of the preference modeling process and, in the case of multicriteria problems with discrete (or, more generally, nonconvex) feasible sets, they may fail to identify several compromise efficient solutions. In standard multicriteria optimization, good controllability can be achieved with the direct use of the reference point methodology [107], based on reservation and aspiration levels for each of the activities. The reservation levels are the required activity levels, whereas the aspiration levels are the desired levels, commonly referred to as reference points. The reference point methodology applied to the cumulated ordered outcomes (16) was tested on the problem of network dimensioning with elastic traffic [96, 108]. The tests confirmed the theoretical advantages of the method. Various (compromise) fair solutions for both continuous and modular problems could be easily generated.

The multiple criteria model of the mean shortfalls to all possible targets (22), when applied to the network dimensioning problem for elastic traffic, results in a model with criteria that measure the actual network throughput for various levels (targets) of flows [109]. Thereby, the criteria can easily be introduced into the model. Experiments with the reference point methodology applied to the multiple target throughput model confirmed the theoretical advantages of the method. Various (compromise) fair solutions were easily generated despite the fact that the single-path (discrete) problem was analyzed.

Both multiple criteria models, with lexicographic optimization of directly defined artificial criteria introduced with some auxiliary variables and linear inequalities, provide corresponding implementations of the MMF optimization independently of the problem structure. The approaches guarantee the exact MMF solution for a complete set of criteria, but their applicability is limited to rather small networks. In [94] some simplified sequential approaches with a reduced number of criteria were developed, thus effectively generating approximations to the MMF solutions. Computational analysis of the MMF single-path network dimensioning problems showed that the approximate models allowed problems for networks with 30 nodes and 50 links to be solved within a minute with very small approximation errors, thus suggesting possible usage in many practical applications.

3.5. Fairness on the Session Level

One of the major challenges of the Internet is to provide high performance of data transport. Basically, the problem is how to obtain high utilization of network resources while ensuring the required quality of communication services. Those two goals result in a potential trade-off: when the amount of data sent through the network is too high, links become overloaded and the quality of service deteriorates.

Overload occurs when the amount of data loading the outgoing link of an Internet router is higher than the amount that can actually be carried. When that happens, the link’s queue of packets becomes longer and the queue’s buffer may eventually overflow. That increases packet delay and delay variation and may also cause packet loss. Both phenomena are perceived by the pair of communicating Internet applications as low quality of data transport.

Let be the set of Internet sessions, which are packet flows between pairs of Internet applications. Let function define the average packet length of the session expressed in bits, and for each , let variable denote the packet rate of session . Then, for each , is an average bit-rate of session .

Let be the set of network links, and for each , let denote the set of links that are used by session , and for each , let denote the set of sessions that use link . Then the load of link is equal to . Let function denote the capacity (the bit-rate) of the link. The following constraint expresses the fact that the total load of any link cannot be greater than the link’s capacity. Consider Overload of an Internet link is a very common situation. Links can become overloaded for a number of reasons: when the amount of traffic entering the network increases significantly, when links lose some capacity due to failures, or when they fail completely and the packet flows must be rerouted to some other links that do not have sufficient capacity. Thus, solving the trade-off between utilization and quality of service requires effective mechanisms for handling overload. That is the place where the concept of fairness is used.

The data between a pair of applications in the Internet can be conveyed using one of two transport protocols, the user datagram protocol (UDP) and the transmission control protocol (TCP). While UDP is a connectionless data transport protocol, where each data packet is sent individually and there is no interaction between the sending and the receiving application, the TCP protocol is connection-oriented, which means that packets are sent within a connection that must be established between the sending and the receiving application before the data can be sent and that can be torn down only after the last packet has been delivered. Due to the connection-oriented character of TCP flows there is an association between the two applications which allows them to control the packet rate.

With the flow control mechanisms of the TCP protocol the rate at which packets are sent is adapted to network conditions: if the amount of available bandwidth is large, the packet rate is increased, and when the links become overloaded the rate is decreased, thus reducing the overload. The packet rate of a TCP session increases every time the sender application receives an acknowledgement that a packet has reached the destination, and the rate is decreased every time a packet is lost. While the increase is linear (additive), the decrease is multiplicative (geometric), which helps to ease congestion quickly. In the reactive scenario, a packet is lost when the packet buffer is saturated. In the proactive scenario, to avoid uncontrolled congestion, the random early discard (RED) mechanism of the router can be activated to drop randomly selected packets. However, in both cases a random packet is lost and a randomly selected session is affected.
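To make the additive increase/multiplicative decrease behaviour concrete, the following minimal sketch (in Python, with purely illustrative function and parameter names not taken from the text) simulates the rate of a single session: the rate grows by a fixed increment per acknowledged round and is halved whenever the offered rate exceeds an assumed link capacity.

```python
def aimd_trace(capacity, n_rounds, incr=1.0, decr=0.5):
    """Toy additive-increase/multiplicative-decrease trace for one session.

    The rate grows by `incr` per acknowledged round; when the offered rate
    exceeds `capacity`, a packet loss is assumed and the rate is multiplied
    by `decr` (halved by default), producing the familiar sawtooth.
    """
    rate, trace = 1.0, []
    for _ in range(n_rounds):
        if rate > capacity:      # overload: a packet is lost
            rate *= decr         # multiplicative decrease
        else:
            rate += incr         # additive increase
        trace.append(rate)
    return trace

print(aimd_trace(capacity=20.0, n_rounds=30))
```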

Arguably, the higher the packet rate of a session, the higher the probability that packets of that session will be dropped and its packet rate reduced. Thus, if a number of sessions have their packet rates reduced due to congestion on a given link, in the long run no session should generate packets at an average rate higher than the other sessions. For each , let variable denote the maximum packet rate on link . Noticeably, there is some maximum rate at which a particular application can generate packets; let function define the maximum achievable packet rate of the session. Thus, the packet rate of the session should, in principle, satisfy the following condition:

Due to (35) the bandwidth of a single link is shared in a fair way. If a link is saturated, every session attains the same packet rate , unless that rate is higher than the maximum achievable rate of that session. Thus, a session cannot have a packet rate higher than any other session unless the other session's maximum achievable rate is lower than . Only if a link is not saturated does every session attain its maximum achievable packet rate. However, since sessions generally use multiple network links, on a given link a session can in fact have a lower packet rate than other sessions that use that particular link. That results from the fact that the packet rate of the session can be reduced even further due to congestion on some other link. Thus, condition (35) must actually be replaced with the following one:

That condition can be interpreted as follows. For any session the session's packet rate attempts to approach the maximum achievable packet rate . However, on any link used by session , the value of cannot exceed the maximal packet rate attained by the sessions that use that particular link. Thus, the session's packet rate can only attain the minimum of those rates, unless that minimum is still higher than , in which case the packet rate of just approaches .
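The single-link reading of condition (35) can be illustrated with a short sketch: given a link capacity and the maximum achievable rates of the sessions using that link, each session receives the minimum of its own cap and a common fair rate chosen so that the capacity is exhausted whenever the caps allow it. The function and data names below are illustrative and not taken from the original formulation.

```python
def fair_share_on_link(capacity, max_rates, eps=1e-9):
    """Single-link fair sharing in the spirit of condition (35): every session
    gets min(its maximum achievable rate, a common rate t), with t chosen so
    that a saturated link is fully used."""
    if sum(max_rates) <= capacity:
        return list(max_rates)               # link not saturated: all caps reached
    rates = [0.0] * len(max_rates)
    remaining = capacity
    active = sorted(range(len(max_rates)), key=lambda i: max_rates[i])
    while active:
        share = remaining / len(active)
        i = active[0]
        if max_rates[i] <= share + eps:      # capped session: give it its maximum
            rates[i] = max_rates[i]
            remaining -= max_rates[i]
            active.pop(0)
        else:                                # remaining sessions split the rest equally
            for j in active:
                rates[j] = share
            break
    return rates

print(fair_share_on_link(10.0, [1.0, 4.0, 8.0]))   # -> [1.0, 4.0, 5.0]
```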

Considering conditions (34) and (36), it can now be seen that the flow control mechanism of the TCP protocol maximizes the vector of the packet rates of individual sessions in a fair way. Consider

The max-min fairness property of the packet rate vector means that the packet rates of the data sessions are increased up to their maximum values unless links become overloaded; in the case of a link overload, the data sessions on the link decrease their rates to the common highest feasible value. This type of behaviour appears to have far-reaching consequences for the design of packet networks that carry elastic traffic, when the aim of the design is to control the quality of service as the capacity of links changes [110].

3.6. Content Distribution Networks

Bandwidth allocation for content distribution through networks composed of multiple tree topologies with directed links and a server at the root of each tree is another problem of fair network optimization [111, 112] and [5, Ch. 6]. Content distribution over networks has become increasingly popular. It may be related, for instance, to a video-on-demand application where multiple programs can be broadcast from each server. Each server broadcasts along a tree topology, where the trees may share links and each link has a limited bandwidth capacity. Figure 7 presents a network with two trees and servers at the root nodes 1 and 2. The server at node 1 can broadcast programs 1, 2, and 3, and the server at node 2 can broadcast programs 4, 5, and 6. The numbers adjacent to the links are the link capacities and the numbers adjacent to the nodes are the programs requested; for example, link (1, 3) has a capacity of 100 Gb/s and programs 2, 3, and 5 are requested at node 7.

These models are fundamentally different from multicommodity network flow models since they do not have flow conservation constraints, as each link carries at most one copy of a program. On the other hand, the models have treelike ordering constraints for each program, as the allocated bandwidth for a given program cannot be increased when moving farther away from the broadcasting server. For each program requested at a node there is an associated performance function that represents satisfaction from the video-on-demand service and depends on the bandwidth available for that program on the incoming link to the node. Fair optimization with respect to the performance values of all nodes and requested programs is needed. In [111] the MMF model is introduced and a lexicographic max-min algorithm is presented. As shown in [113], the algorithm can be implemented in a distributed mode where most of the computations are done independently and in parallel at all nodes, while some information is exchanged among the nodes. More complex content distribution models and corresponding algorithms are discussed in [114–116].
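The treelike ordering constraint can be sketched as follows: for a single program, any tentative per-node allocation has to be capped by the allocation at the parent node, so that the bandwidth never increases when moving away from the server. The tree and the numbers below are hypothetical and serve only to show the mechanics of the constraint.

```python
def enforce_tree_ordering(children, alloc, root):
    """Cap each node's tentative allocation for one program by its parent's
    allocation, so the bandwidth is nonincreasing away from the server.
    `children[u]` lists the children of node u in the broadcast tree."""
    capped = dict(alloc)
    stack = [root]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            capped[v] = min(capped[v], capped[u])
            stack.append(v)
    return capped

children = {1: [3, 4], 3: [6, 7]}                     # hypothetical broadcast tree
alloc = {1: 50, 3: 40, 4: 60, 6: 45, 7: 20}           # tentative per-node bandwidth
print(enforce_tree_ordering(children, alloc, root=1)) # node 4 capped to 50, node 6 to 40
```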

3.7. Fairness Issues in the IP Traffic

In its beginnings, the Internet suffered from severe deficiencies due to congestion. The answer came from new features added to TCP, namely, admission control and additive increase/multiplicative decrease algorithms that led to congestion avoidance and fair rate allocation. The main idea was to put the traffic control mechanisms at the end nodes and to combine packet scheduling with admission control so as to obtain fair bandwidth sharing. Plenty of studies have been devoted to the behavior of the network when such congestion avoidance algorithms are employed. They have shown that this leads to some kind of max-min fair sharing in very simplified networks [117] and to proportional fairness for large networks ([67] etc.). The difference is mainly due to end-to-end delays, which can differ significantly in large-scale networks. At this point, an important topic is how to get close to maximal throughput while keeping a high level of fairness. In [118] the performance of networks handling elastic flows (which, in contrast to stream flows, adjust their rate to the available bandwidth) is investigated. It is shown that in linear networks under random traffic patterns, ensuring max-min fairness results in better throughput performance compared to proportional fairness, while the converse holds for persistent flows. All these works are situated at the session level and refer to traffic demand as the product of the flow arrival rate and the average flow size. At this stage, a more global solution would come from combining session-level decisions (see Section 3.5) with higher-level decisions such as routing and load balancing. Hence, the relations of rate adaptation and congestion control in TCP networks with routing and network design have been the subject of several works over the last decade. Among them, some work has been devoted to the static routing case (connections and corresponding routing paths are given) where source rates are subject to changes. In [12] the water-filling algorithm is presented for achieving an MMF distribution of resources to connections in the fixed single-path routing case (where each connection is associated with a particular fixed path). The main idea of the algorithm is to uniformly increase the individual allocations of connections until one or more links become congested. Then the connections that cannot be improved are removed from the network together with the capacities they occupy; the process continues until all connections are removed. In [119], the problem of MMF bandwidth-sharing among elastic traffic connections when routing is not fixed has been considered in an offline context. The proposed iterative algorithm can be seen as an extension of the water-filling algorithm given in [12], except that the routing is not fixed and at each iteration a new routing is computed, while the previously saturated links and the corresponding fair sharing remain fixed until the end of the algorithm.
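The water-filling idea described above can be sketched as a progressive-filling procedure; the sketch below assumes fixed single-path routing and uses illustrative demand and link names.

```python
def water_filling(paths, capacity, eps=1e-9):
    """Progressive filling: all active demands are increased at the same pace;
    when a link saturates, the demands crossing it are frozen (their rates and
    the capacity they occupy stay fixed) and the process continues."""
    rate = {d: 0.0 for d in paths}
    residual = dict(capacity)
    active = set(paths)
    while active:
        # largest common increment before some link carrying an active demand saturates
        delta = min(residual[e] / sum(1 for d in active if e in paths[d])
                    for e in residual if any(e in paths[d] for d in active))
        for d in active:
            rate[d] += delta
        for e in residual:
            residual[e] -= delta * sum(1 for d in active if e in paths[d])
        saturated = {e for e, r in residual.items() if r <= eps}
        active = {d for d in active if not saturated & set(paths[d])}
    return rate

# three demands on a two-link line network: d2 and d3 get 3 each, d1 gets the remaining 7
print(water_filling({'d1': ['e1'], 'd2': ['e1', 'e2'], 'd3': ['e2']},
                    {'e1': 10.0, 'e2': 6.0}))
```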

Load balancing is in a way a problem dual to MMF routing (see TAP/MMF in Section 3.2), as one focuses on min-max fair load sharing instead of max-min fair bandwidth allocation to demands. Achieving load balancing in a given network consists in distributing the demand traffic (load) fairly among the network links while satisfying a given set of traffic constraints. Fair load sharing means that not only is the maximal load among links minimized, but the sorted (in nonincreasing order) vector of link loads is minimized lexicographically, as in formulation (28a)–(28d). In contrast to a max-min fair routing problem like TAP/MMF, the link load-balancing problem assures fairness in the min-max sense. The problem arises in communication networks when the operator needs to define routing with respect to a given traffic demand matrix such that the network load is fairly distributed among the network links. The problem can easily be modeled and solved by conventional LP methods using the MMF properties for linear link loads. This approach can be applied to more general link load functions (especially nonlinear ones, frequently used in telecommunications). In practice the load/delay functions considered by network operators are usually nonlinear. A well-known load/delay function, called the Kleinrock function, is given by . It can be shown that any routing achieving min-max fairness for the relative load function (i.e., ) also achieves min-max fair load for the Kleinrock function. This idea is generalized to general link load functions of the forms and , where and give, respectively, the flow and capacity on link and is a given constant; see [120] for further details.
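Since the Kleinrock formula is not reproduced above, the sketch below assumes its common form y/(c − y) for a link with flow y and capacity c; because this delay equals ρ/(1 − ρ) with the relative load ρ = y/c, it is an increasing function of the relative load alone, so the worst-loaded link is the same under both measures.

```python
def relative_load(flow, cap):
    return flow / cap

def kleinrock_delay(flow, cap):
    # assumed M/M/1-type form y / (c - y); defined only for flow < cap
    return flow / (cap - flow)

# hypothetical flows and capacities on three links of a candidate routing
links = {'e1': (40.0, 100.0), 'e2': (70.0, 100.0), 'e3': (30.0, 50.0)}

rel = {e: relative_load(y, c) for e, (y, c) in links.items()}
kle = {e: kleinrock_delay(y, c) for e, (y, c) in links.items()}

# the delay y/(c - y) equals rho/(1 - rho), an increasing function of the
# relative load rho, so both measures point to the same worst link
print(max(rel, key=rel.get), max(kle, key=kle.get))   # e2 e2
```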

The problem of fairness is more complex when dealing with wireless networks and has been addressed in a number of papers during the last decade. A range of problems can be distinguished depending on the network characteristics: wireless mesh networks, ad hoc networks, sensor networks, random-access networks, opportunistic networks, and so forth. As for conventional wired networks, a fundamental problem in wireless networks is to estimate their throughput capacity and then to develop protocols that utilize the network close to this capacity without causing congestion and unfairness. The first idea that comes to mind to address the fairness problem in wireless networks is the classic congestion management approach inherited from wired networks: nodes/flows receive preassigned fair shares and admission control is applied to ensure fair sharing. In wireless networks this cannot be applied directly because of interference, which constrains the set of links that can transmit simultaneously, while in ad hoc networks the mobility of nodes and routers renders the problem even more complicated. In WSNs (wireless sensor networks) the fairness problem becomes, on the one hand, closely connected to fair data gathering, that is, serving the sources equitably, and, on the other hand, connected to energy-aware operation because of the reduced energy capacity of the nodes in such networks. The main constraint one has to deal with is then the so-called MAC (medium access control) constraint. Two basic definitions of this constraint can be distinguished: the protocol one and the physical one. The protocol definition of interference assumes that two links which are less than a certain number of hops (generally 2) away from each other potentially interfere and cannot be scheduled in the same time slot; the indicated number of hops refers to the number of hops between the sender nodes of these links. The physical definition, on the other hand, is based on the signal-to-interference-and-noise ratio (SINR) constraint: transmission links that do not satisfy the SINR constraint cannot be scheduled simultaneously. Hence, this constraint leads to connected problems, namely synchronization and scheduling, and most of the work related to these strategies is dedicated to scheduling. The basic version of the time slot allocation problem aims to find a slot allocation for all nodes in the network with a minimal number of slots such that neighboring nodes (within the given number of hops) are not allocated the same time slot. The respective optimization problem is the graph coloring problem, which aims to minimize the number of colors needed to color the nodes so that two neighboring elements do not use the same color (a simple greedy sketch of such a slot allocation is given after this paragraph). The problem becomes more difficult if one desires to achieve fairness between connections or sources; this leads to max-min fair scheduling. In [121] the authors consider scheduling policies for max-min fair allocation of bandwidth in wireless ad hoc networks. They formalize the max-min fair objective under wireless scheduling constraints and propose a fair scheduling that assigns dynamic weights to the flows, such that the weights depend on the congestion in the neighborhood, and schedules the flows that constitute a maximum weighted matching. In [122], the authors propose a quite different alternative.
Their method is inspired by per-flow queuing in wired networks and consists of a probabilistic packet scheduling scheme achieving max-min fairness without changing the existing IEEE 802.11 medium access control protocol. When a wireless node is ready to send a packet, the packet scheduler of the node is likely to select the queue whose number of packets sent in a certain time window is the smallest, and when no packet is available, the transmission is delayed by a fixed duration. In [123] the authors investigate simple queuing models for random traffic and discuss their relevance for both wired and wireless transmissions.
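The slot-allocation sketch promised above: a simple greedy coloring assigns each node the smallest slot not used by its conflicting neighbours. The conflict graph is hypothetical, and greedy coloring gives only an upper bound on the number of slots, since the underlying coloring problem is hard in general.

```python
def greedy_slot_allocation(conflicts):
    """Assign time slots (colors) greedily: nodes are processed in order of
    decreasing conflict degree and each receives the smallest slot not used
    by any of its already-scheduled conflicting neighbours."""
    slot = {}
    for v in sorted(conflicts, key=lambda u: -len(conflicts[u])):
        used = {slot[u] for u in conflicts[v] if u in slot}
        s = 0
        while s in used:
            s += 1
        slot[v] = s
    return slot

# hypothetical line topology 1-2-3-4 with 2-hop interference
conflicts = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(greedy_slot_allocation(conflicts))   # three slots suffice here
```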

With respect to WSNs, the rate allocation problem for data aggregation in wireless sensor networks can be posed with two objectives: the first is maximizing the minimum (max-min) lifetime of an aggregation cluster and the second is achieving fairness among all data sources. The two objectives cannot be maximized simultaneously, and one approach is to solve recursively: first the max-min lifetime problem for the aggregation cluster, and next the fairness problem. In [124] the authors use this approach and formulate the problem of maximizing fairness among all data sources under a given max-min lifetime as a convex optimization problem. Next, they compute the optimal rate allocations iteratively by a lexicographic method. In a recent paper [125], the authors address the problem of scheduling MMF link transmissions in wireless sensor networks jointly with transmission power assignment. Given a set of concurrently transmitting links, the considered optimization problem seeks transmission power levels at the nodes so that the SINR values of the active links satisfy the max-min fairness property. By guaranteeing a fair transmission medium (in terms of SINR), other network requirements, such as the scheduling length, the throughput (directly dependent on the number of concurrent links in a time slot), and the energy savings (no collisions and retransmissions), can be directly controlled.

4. Location and Allocation Problems

4.1. Inequality Measures

The spatial distribution of public goods and services is influenced by facility location decisions, and the issue of equity (or fairness) is important in many location decisions. In particular, various public facilities (or public service delivery systems), like schools, libraries, and health-service centers, require some spatial equity while making location-allocation decisions [126, 127].

The generic discrete location problem may be stated as follows. There is given a set of clients (service recipients), each represented by a specific point. There is also given a set of potential locations for the facilities, and the number (or the maximal number) of facilities to be located is given (). This formulation covers discrete location problems as well as network location problems with possible locations restricted to some subsets of the network vertices [128]. The main decisions to be made can be described with the binary variables equal to 1 if location is to be used and equal to 0 otherwise. To meet the problem requirements, the decision variables have to satisfy the following constraints:

where the equation is replaced with the inequality () if specifies the maximal number of facilities to be located. Further, the allocation decisions are represented by additional variables equal to 1 if location is used to service client and equal to 0 otherwise. The allocation variables have to satisfy the following constraints:

In the capacitated location problem the capacities of the potential facilities are given, which implies some additional constraints.
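For concreteness, the location and allocation variables together with the constraints described above can be written down, for a tiny hypothetical instance, with the PuLP modeller; the classical total-distance (median) objective is used here only as a placeholder, since the fair (ordered) criteria discussed later would replace it.

```python
import pulp

# hypothetical data: 4 clients, 3 potential locations, p = 2 facilities
I, J, p = list(range(4)), list(range(3)), 2
d = [[2, 7, 9], [5, 3, 8], [6, 4, 2], [8, 6, 3]]     # distances d[i][j]

prob = pulp.LpProblem("discrete_location_sketch", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", J, cat="Binary")          # location j used
x = pulp.LpVariable.dicts("assign", (I, J), cat="Binary")   # client i served from j

prob += pulp.lpSum(d[i][j] * x[i][j] for i in I for j in J)  # median objective (placeholder)
prob += pulp.lpSum(y[j] for j in J) == p                     # exactly p facilities located
for i in I:
    prob += pulp.lpSum(x[i][j] for j in J) == 1              # every client assigned once
    for j in J:
        prob += x[i][j] <= y[j]                              # only to an opened location

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in J if y[j].value() == 1])
```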

Let denote the distance between client and location (travel effort or other effect of allocating client to location ). For the standard uncapacitated location problem it is assumed that all the potential facilities provide the same type of service and each client is serviced by the nearest located facility. The individual objective functions can then be expressed in the linear form:

These linear functions of the allocation variables are applicable for the uncapacitated as well as the capacitated facility location problems. In the case of location of desirable facilities a smaller value of the individual objective function means a better effect (smaller distance). This remains valid for location of obnoxious facilities if the distance coefficients are replaced with their complements with respect to some large number: , where for all and . Generally, the distances may be replaced with utility values or so-called proximity measures; see, for example, [129]. Therefore, we can assume that each function is to be minimized, as stated in the multiple criteria problem [130].

Further, some additional client weights are included in the location model to represent the service demand (or the clients' importance). Integer weights can be interpreted as numbers of unweighted clients located at exactly the same place. The normalized client weights for are used rather than the original quantities . In the case of the unweighted problem (all ), all the normalized weights are given as .

Note that constraints (38) take the very simple form of the binary knapsack problem with all the constraint coefficients equal to 1 [131]. Indeed, the location problem may be viewed as a resource allocation problem on a network. It may be considered as allocating capacities to the links from an artificial source to the potential location nodes, while flows are routed from the source to all client nodes through the potential location nodes [19, 20].

Equity is usually quantified with so-called inequality measures to be minimized. Inequality measures were primarily studied in economics [57, 76]. The simplest inequality measures are based on the absolute measurement of the spread of outcomes. Variance is the most commonly used inequality measure of this type and it has also been widely analyzed within various location models [132, 133]. However, various other measures have been proposed in the literature to gauge the level of equity in facility location alternatives [58], like the mean absolute difference (41), also called Gini's mean difference [9, 59], or the mean absolute deviation (42).
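A short sketch of the two absolute inequality measures mentioned above, for outcomes given with normalized client weights; the pairwise form with the factor 1/2 is one common convention for Gini's mean difference and may differ from the exact normalization used in the cited works.

```python
def weighted_mean(outcomes, weights):
    return sum(o * w for o, w in zip(outcomes, weights))

def mean_absolute_difference(outcomes, weights):
    """Gini's mean difference: half the weighted average of |f_i - f_j| over
    all pairs of clients (one common convention)."""
    return 0.5 * sum(abs(oi - oj) * wi * wj
                     for oi, wi in zip(outcomes, weights)
                     for oj, wj in zip(outcomes, weights))

def mean_absolute_deviation(outcomes, weights):
    """Weighted mean absolute deviation from the mean outcome."""
    mu = weighted_mean(outcomes, weights)
    return sum(abs(o - mu) * w for o, w in zip(outcomes, weights))

outcomes, weights = [0, 2, 100], [0.45, 0.45, 0.10]   # illustrative distances and weights
print(mean_absolute_difference(outcomes, weights),
      mean_absolute_deviation(outcomes, weights))
```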

In economics one usually considers relative inequality measures normalized by the mean outcome. Among many inequality measures perhaps the most commonly accepted is the Gini index (Lorenz measure), a relative measure of the mean absolute difference, which has also been analyzed in the location context [134–136]. One can easily notice that a direct minimization of typical inequality measures (especially relative ones) contradicts the minimization of individual outcomes. As noticed by Erkut [134], it is rather a common flaw of all the relative inequality measures that while moving away from the clients to be serviced one gets better values of the measure, as the relative distances become closer to one another. As an extreme, one may consider an unconstrained continuous (single-facility) location problem and find that a facility located at (or near) infinity will provide (almost) perfectly equal service (in fact, rather lack of service) to all the clients. Unfortunately, the same applies to all dispersion-type inequality measures, including the upper semideviations. This can be illustrated by a simple example of a location problem on a network.

Example 3. Consider a single facility location on a (triangular) network of 3 nodes: two nodes and that are close to each other , and one remote node with (see Figure 8). Most of the demand is equally distributed in and . That means the normalized weights take values and with a very small positive value . While locating the facility at node (or ) one gets distance 0 for demand, distance 2 for demand, and the large distance 100 for only demand. However, and , and in terms of MAD minimization it is beaten by the remote location . Indeed, locating the facility at node one gets distance 0 for only demand while getting the large distance 100 for demand, thus much worse than for . Nevertheless, and . Hence, for small values of : . Actually, for sufficiently small values (e.g., ) location is a global MAD minimizer on the entire network (when allowing location on edges in addition to the nodes).
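A numeric reading of Example 3, assuming the demand weights are (1 − ε)/2 at each of the two close nodes and ε at the remote node (this split is an assumption, as the exact values are not reproduced above): for a small ε the MAD of the remote, essentially useless, location is indeed smaller.

```python
eps = 0.001                                   # assumed small demand share of the remote node
w = [(1 - eps) / 2, (1 - eps) / 2, eps]       # assumed normalized weights

def mad(outcomes, weights):
    mu = sum(o * v for o, v in zip(outcomes, weights))
    return sum(abs(o - mu) * v for o, v in zip(outcomes, weights))

at_close_node = [0, 2, 100]      # distances when the facility is at one of the close nodes
at_remote_node = [100, 100, 0]   # distances when the facility is at the remote node

print(mad(at_close_node, w), mad(at_remote_node, w))
# the remote location wins the MAD comparison despite serving almost nobody well
```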

For typical inequality measures a simplified bicriteria mean-equity model is computationally very attractive, since both criteria are well defined directly for the weighted location problem without the necessity of its disaggregation; however, it may result in solutions which are inefficient. It turns out that, under the assumption of bounded trade-offs, the bicriteria mean-equity approaches for selected absolute inequality measures (maximum upper deviation, mean semideviation, or mean absolute difference) comply with the rules of equitable (fair) optimization [9, 137]. In other words, several inequality measures can be combined with the mean itself into optimization criteria generalizing the concept of the worst outcome and generating equitably consistent underachievement measures. Simple sufficient conditions for inequality measures to keep this consistency property have been introduced in [137].

This applies, in particular, to the mean absolute difference (41), generating a proper fair solution concept: for any . A similar result is valid for the mean absolute deviation (42), but not for the variance [24, 137].

4.2. Lexicographic Minimax and Ordered Medians

Although minimization of the inequality measures contradicts the minimization of individual outcomes, the inequality minimization itself can be consistently incorporated into locational models. The notion of equitable multiple criteria optimization [63] introduces a preference structure that complies with both the outcome minimization and the inequality minimization rules [57, 76]. Equitable optimization is well suited for locational analysis [9, 137, 138]. The equitable (fair) efficiency models presented in Section 2.3 apply also to minimized outcomes, as commonly considered in location-allocation problems. The equitable minimization can be modeled with the standard multiple criteria optimization applied to the cumulative ordered outcomes, expressing, respectively, the worst outcome, the total of the two worst outcomes, the total of the three worst outcomes, and so forth. However, in the case of minimization the worst outcome means the largest rather than the smallest. Hence, the corresponding model takes the form

where . The minimax, called the center solution concept, represents only the first criterion, while the total outcome criterion, called the median solution concept, is focused on the last criterion. Several cent-dian solution concepts combining these two criteria have been considered (see [139] and references therein). For unweighted location problems, a compromise solution concept was introduced by Slater [140] as the so-called -centrum, where the sum of the largest distances is minimized. Consistently with typical distribution characteristics, the -centrum concept is restricted to unweighted problems. Although some weights may be used to scale the specific distances [141] (which may be considered a definition of distance-dependent outcomes), the demand weights defining the distribution of clients are not taken into account. Ogryczak and Zawadzki [142] introduced a parametric generalization of the -centrum concept applied to weighted problems by taking into account the portion of demand related to the largest outcomes (distances) rather than a specific number of worst outcomes. Namely, for a specified portion of demand the entire portion (quantile) of the largest outcomes is taken into account and their average is considered as the (worst) conditional -mean outcome. According to this definition the concept of the conditional median is based on averaging restricted to the portion of the worst outcomes. For unweighted location problems and , the conditional -mean represents the average of the largest outcomes, thus modeling the -centrum solution concept.
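The conditional mean can be sketched directly from its definition: sort the outcomes from worst (largest) to best and average the worst outcomes covering the specified portion of the total demand. The names and data below are illustrative.

```python
def conditional_mean(outcomes, weights, beta):
    """Average of the largest outcomes covering a `beta` portion of the demand;
    for unweighted data and beta = k/m this is the mean of the k largest
    outcomes, that is, the k-centrum criterion."""
    order = sorted(zip(outcomes, weights), key=lambda ow: -ow[0])  # worst first
    total, acc = 0.0, 0.0
    for o, w in order:
        take = min(w, beta - acc)
        if take <= 0:
            break
        total += o * take
        acc += take
    return total / beta

# worst 30% of the demand: all of the outcome 100 (weight 0.10) and part of the outcome 2
print(conditional_mean([0, 2, 100], [0.45, 0.45, 0.10], beta=0.3))
```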

However, in order to guarantee the equitable efficiency of a selected location pattern one needs to take into account all the ordered outcomes (all the criteria in (44)). The entire multiple criteria ordered model is rich with various equitably efficient solution concepts [64, 142, 143]. For the weighted sum aggregation one gets the OWA aggregation (18), called the ordered median solution concept [144]. If the OWA weights are strictly increasing and positive, that is , then each optimal solution of the OWA problem (18) is an equitably (fairly) efficient location pattern. Although the cumulated ordered outcomes can be expressed with linear programming models [85], these approaches require the disaggregation of the location problem with the demand weights, which usually dramatically increases the problem size.
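The ordered median (OWA) aggregation itself is a one-line computation once the outcomes are sorted; in the minimization setting, attaching larger weights to larger (worse) outcomes expresses inequality aversion. The weight convention below (the first weight multiplies the worst outcome) is only one possible reading of formulation (18).

```python
def ordered_median(outcomes, owa_weights):
    """OWA aggregation of minimized outcomes: sort the outcomes in nonincreasing
    order (worst first) and take the weighted sum with the given weights;
    owa_weights[0] multiplies the worst outcome."""
    worst_first = sorted(outcomes, reverse=True)
    return sum(w * o for w, o in zip(owa_weights, worst_first))

print(ordered_median([3, 8, 1], [0.5, 0.3, 0.2]))   # 0.5*8 + 0.3*3 + 0.2*1 = 5.1
```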

When applying lexicographic optimization to problem (44), one gets the lexicographic minimax solution concept, also called the lexicographic center [42], as a lexicographic refinement of the center solution concept. The lexicographic minimax location problem may be converted to a lexicographic minimization objective by constructing counting functions that count, for each possible distinct outcome, the number of occurrences of that outcome. It is quite simple to construct such counting functions for the discrete location problem (see [42, 48] or [5, Ch. 7.2]).
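The counting-function idea can be sketched for the discrete case as follows: with a finite set of possible distinct outcome values, each location pattern is represented by the vector counting, for every value from the worst down, how many clients attain it, and lexicographic minimization of these vectors reproduces the lexicographic minimax comparison. The instance below is hypothetical.

```python
def counting_vector(outcomes, distinct_values):
    """For each possible distinct outcome value, from the worst (largest) down,
    count how many clients attain it; lexicographic minimization of this vector
    is equivalent to lexicographic minimax over the outcomes."""
    return [sum(1 for o in outcomes if o == v)
            for v in sorted(distinct_values, reverse=True)]

values = [0, 2, 100]              # all distinct distances in the instance
pattern_a = [0, 2, 100, 2]        # client outcomes under one location pattern
pattern_b = [2, 2, 2, 100]        # client outcomes under another pattern

# plain lexicographic comparison of the counting vectors selects pattern_a,
# which is indeed the lexicographically minimax-better of the two
print(min(pattern_a, pattern_b, key=lambda p: counting_vector(p, values)))
```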

The lexicographic maximin approach can be applied to various location problems. The sensor location problem is an extension of the equitable facility location problem [5, Ch. 7.3] and [145]. Consider a set of nodes that need to be monitored as protection against undesired intrusion and a set of nodes where sensors can be placed. Let be the subset of nodes in that can monitor node , and let be the number of available sensors that can be placed among nodes in . The protection level provided to node is represented by the number of sensors that monitor node . The sought-after solution is the lexicographic maximin solution with respect to the numbers of sensors that protect the nodes . Figure 9 presents a problem with four locations that need to be monitored () and four locations where sensors can be placed (). The links that connect the nodes represent the subsets . Two sensors can be placed among the nodes in .

The formulation of this problem is as follows:

In Figure 9, the unique optimal solution has sensors at nodes 1 and 5, implying that nodes 2, 3, and 5 are monitored by both sensors while node 4 is monitored by only one sensor. Note that, in general, there may be multiple optimal solutions. This problem can be solved by constructing counting functions as described in Section 2.4. However, whereas for the equitable facility location problem [42] the counting function for each location is represented by a single constraint, here the representation of the counting functions adds a large number of variables and constraints to the problem. Now, suppose that the probability of detecting an intruder at node from a sensor at node is for . Then the protection level provided to is the probability that an intruder will be detected at node by at least one sensor from among those placed in the set . Although the formulation of this case is similar to the formulation above, the number of possible distinct outcomes can be much larger. As discussed in Section 2.4, this would necessitate employing a different solution method that is not based on counting functions (see [145]).
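A brute-force sketch of the lexicographic maximin sensor placement on a small hypothetical instance (the data of Figure 9 are not reproduced here): all placements of the available sensors are enumerated, and the one whose sorted vector of protection levels is lexicographically largest, starting from the worst-protected node, is kept.

```python
from itertools import combinations

def leximax_sensor_placement(monitor_sets, sensor_sites, p):
    """Enumerate all placements of p sensors and keep the one whose ascending
    vector of protection levels (number of sensors monitoring each node) is
    lexicographically largest, i.e., the lexicographic maximin solution."""
    best, best_levels = None, None
    for placement in combinations(sensor_sites, p):
        levels = sorted(sum(1 for s in placement if s in monitor_sets[n])
                        for n in monitor_sets)              # worst level first
        if best_levels is None or levels > best_levels:
            best, best_levels = placement, levels
    return best, best_levels

# hypothetical instance: nodes a..d to protect, sensor sites 1..4, two sensors
monitor_sets = {'a': {1, 2}, 'b': {1, 3}, 'c': {2, 3, 4}, 'd': {3, 4}}
print(leximax_sensor_placement(monitor_sets, [1, 2, 3, 4], p=2))
```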

5. Complexity Issues

Essentially, fair optimization models are based on concave piecewise linear criteria, possibly replacing a linear criterion of total output maximization. Such criteria, implementable with auxiliary linear inequalities, in most cases do not significantly affect the complexity of the original optimization problems. In particular, problems represented by linear programming remain linear programs in their fair optimization versions (a single LP problem in the case of fair OWA aggregations, or a sequence of LP problems for the MMF models). Certainly, some specialized algorithms taking into account the structure of the problem at hand can be more efficient than general linear programming techniques. Consider the resource allocation problem (23a)–(23d) in Section 2.4, which has a lexicographic maximin objective. For certain performance functions (e.g., linear and exponential functions) a lexicographic maximin solution is obtained by manipulations of closed-form expressions in polynomial time. As presented in [5, Ch. 3.3], depending on the algorithm employed, the computational effort for solving (23a)–(23d) is or , where is the number of activities in the set and is the number of resources in the set . Moreover, the same complexity is achieved for some content distribution problems in tree networks described in Section 3.6. With respect to communication network applications, a well-known example of such a specialized algorithm is the already mentioned water-filling algorithm (see Section 3.2). Another example is a special case of the single-source traffic allocation problem (also see Section 3.2), for which Megiddo [146] introduced a polynomial-time MMF algorithm that applies to splittable (fractional) flows. As presented in Section 2.4, there exist simple polynomial-time techniques for solving general convex MMF problems. Thus, when applied to network problems, the algorithms do not depend on any specific traffic routing problem formulation and are sufficiently general to be applied to a broad class of traffic routing and capacity allocation problems.

Generally, MMF optimization problems on convex attainable sets are characterized by polynomial complexity [92]. Polynomial algorithms may be developed for various specific forms of load balancing problems. For instance, in [147] a polynomial algorithm is presented to determine the MMF optimal bandwidth allocation in order to satisfy the communication needs between two private networks. The algorithm is guaranteed to converge in a finite number of steps, and for linear costs its complexity is .

Nonconvex attainable sets usually result in NP-hard complexity of the corresponding fair optimization problems. In the network environment this is the case of single-path flows (unsplittable flows). In particular, a single-source multiple-sink demand MMF optimization of single-path flows in a directed network was proven NP-hard in [148]. Nilsson [149] generalized this result, showing that general MMF unsplittable-flow problems on undirected networks are NP-hard. This applies to the case when each demand may use any path as well as to the case when each demand may use one path from a predefined list. Actually, it is proven there that in both cases obtaining just the first entry of the sorted allocation vector (the standard maximin) is NP-hard in itself. Observe that this implies that all corresponding fairness optimization models are NP-hard, as they must take that criterion into account. Single-path optimization problems remain NP-hard also when fairness is implemented as a constraint rather than a criterion. Amaldi et al. [150] showed that the Max-Throughput Single-Path Network Routing problem subject to MMF flow allocation is NP-hard even with equal (unit) capacities for all links. Nilsson [149] has also shown that the nonconvexity introduced by modular (granular) flows makes even splittable traffic allocation problems NP-hard. Therefore, there is an emerging need to develop approximate or heuristic algorithms for such problems. Early results in this area show that several communication network problems with PF or OWA fairness criteria can be effectively handled by metaheuristic approaches [80, 106, 151].

In location and allocation problems the general fairness (equitable) models may be viewed as the so-called ordered median solution concepts, corresponding to the OWA criterion with monotonic weights. Such a criterion may be implemented with simple auxiliary linear inequalities. Nevertheless, even standard (median or center) multifacility location problems on general networks are usually NP-hard, and the same remains valid for the ordered median problems. For tree networks, however, polynomial-time algorithms exist. The dynamic programming algorithm for the ordered median problem presented in [152] has time complexity of for the general problem, and just for the node-restricted problem. Polynomial algorithms also exist for the single-facility ordered median location problems [153], with complexity for trees and for general networks.

In this survey we have not discussed in detail fair optimization in connection with problems that can be related to abstract networks or analyzed with the help of network models. An important wide group of such problems is related to job-shop scheduling. Most approaches to the job-shop scheduling problem deal with the makespan criterion, that is, the maximum completion time of all jobs. Still, there are various criteria that consider the due dates of jobs and aim at minimizing the tardiness of jobs or the number of jobs that are late, that is, not completed before their due dates. Actually, simple aggregations of a number of such uniform criteria are commonly applied. Each single criterion applies to one scheduling object, like a job or an affected agent, and a need for aggregations providing fairness arises. Note that any fair aggregation is strictly increasing, thus satisfying the condition of the so-called regular scheduling criterion (an increasing function of the completion times of the jobs, so that it is always optimal to start and complete jobs as early as possible). The job-shop scheduling problems with regular criteria are well studied. For the nonpreemptive two-machine job-shop scheduling problem with a fixed number of jobs, any regular criterion can be optimized in polynomial time [154]. Generally, the -job -machine job-shop problem belongs to the class of NP-hard problems [155–157], though there are exceptions for specific problems. Nevertheless, generic efficient approaches are available for approximately solving the job-shop scheduling problem with regular criteria [158]. The importance of fairness issues has recently been recognized in just-in-time sequencing problems [13] and in apportionment concepts [76, 159]. Very few fair optimization approaches have been presented for job-shop scheduling, although such approaches were considered already in 1989 in [160]. Specifically, a lexicographic minimax objective was analyzed for the production smoothness of multiple feeder shops that produce components for custom-made products assembled at a final assembly shop. Finally, we mention that fair dominance rules were used in a multiobjective method to solve a reentrant hybrid flow shop scheduling problem [161].

6. Concluding Remarks

In systems which serve many entities there is a need to respect some fairness rules. Extensive progress in fair optimization methodology made in the last three decades has resulted in a variety of techniques enabling the generation of fair and efficient solutions. In particular, allocation problems related to communication networks and location-allocation problems are the areas where fair optimization concepts have been extensively developed and widely applied. Within the networking applications the lexicographic maximin approach (or the related max-min fairness approach) is the most widely used. The recent book by Luss [5] exhibits a variety of models with a lexicographic maximin objective and the corresponding algorithms in the context of resource allocation. Many of these models apply to communication network and location problems. Since this approach may lead to significant losses in overall efficiency (e.g., in the throughput of the network), proportional fairness or other utility-based approaches (like -fairness) are also applied. In location-allocation problems, fairness understood as equity is usually quantified with inequality measures to be minimized, or treated with minimax optimization, called the center solution concept. The latter is applied especially to emergency facility location and has recently been considered with a lexicographic regularization leading to the lexicographic minimax criterion. The inequality measures are scalar indices based on some measurement of the spread of outcomes. Direct minimization of the inequality measures contradicts the optimization of individual outcomes, but several inequality measures can be combined with the mean outcome into equitable criteria, thus allowing various fair solutions to be generated.

A wide variety of fair optimization models and algorithms supporting efficient and fair allocation in complex systems have been introduced and studied in the literature. In most cases, they can be effectively used to generate various fair allocation schemes while taking into account the problem specificities. Nevertheless, problems with a discrete structure lead to massive computations, questioning the possibility of achieving any fair solution in a reasonable time. Therefore, there is a need to develop approximate or heuristic algorithms for such problems.

Frequently, one may be interested in putting some additional service importance weights into the allocation models. The importance weights are easily incorporated into the scalar inequality measures [59, 137, 162] or the Jain fairness index [163], as well as into proportional fairness [7]. There are also possibilities to introduce importance weights into the general fair preferences [164] and fair optimization models. In particular, the OWA aggregations (18) may be extended to the corresponding Weighted OWA (WOWA) aggregations [165, 166], which still remain LP computable [167, 168], while metaheuristics may also be applied [169]. The performance functions in a lexicographic minimax objective function may also include demand weights, cf. [31, 43, 170] and [5, Ch. 1].

Vector fair optimization approaches taking into account multiattribute outcomes are still underexplored. In the resource allocation context this relates to problems with multiple types of resources where the users request different ratios of different resources. A typical example is datacenters processing jobs with heterogeneous resource requirements on CPU, memory, network bandwidth, and so forth. A recently proposed (vector) fairness measure [171], called dominant resource fairness, allocates resources according to max-min fairness on dominant resource shares. Köppen et al. [172] have extended the Jain fairness index [61] to the multiattribute case. By means of a leximin procedure, an allocation can be selected where the smallest among the Jain fairness indexes takes the largest value. This extends the notion of an allocation where fairness is achieved only for a single allocation metric. A unifying framework addressing the fairness-efficiency tradeoff in the light of multiple types of resources has been developed in [173].

Another still underexplored area of fair network optimization is related to distributed optimization processes and related models [174]. In some equitable optimization problems, as shown in [113], the optimization algorithm can be implemented in a distributed mode where most of the computations are done independently and in parallel at the nodes. However, in most cases the distributed approaches to fairness must be based on game theory rather than on direct optimization [175–177].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

Research conducted by W. Ogryczak, M. Pióro, and A. Tomaszewski was supported by the National Science Centre (Poland) under Grant 2011/01/B/ST7/02967 “Integer programming models for joint optimization of link capacity assignment, transmission scheduling, and routing in fair multicommodity flow networks.”