Journal of Applied Mathematics
Volume 2011, Article ID 218078, 11 pages
Research Article

A Heuristic Algorithm for Resource Allocation/Reallocation Problem

S. Raja Balachandar and K. Kannan

Department of Mathematics, School of Humanities & Sciences, SASTRA University, Thanjavur 613401, Tamil Nadu, India

Received 3 February 2011; Revised 16 June 2011; Accepted 19 July 2011

Academic Editor: Yuri Sotskov

Copyright © 2011 S. Raja Balachandar and K. Kannan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper presents a 1-opt heuristic approach to solve the resource allocation/reallocation problem, which is known as the 0/1 multichoice multidimensional knapsack problem (MMKP). The intercept matrix of the constraints is employed to find an optimal or near-optimal solution of the MMKP. This heuristic approach is tested on 33 benchmark problems taken from the OR library, of sizes up to 7000, and the results have been compared with the optimum solutions. The computational complexity of solving the MMKP heuristically with this approach is proved to be polynomial. The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions found. The encouraging results, especially for the relatively large-size test problems, indicate that this heuristic approach can successfully be used for finding good solutions for highly constrained NP-hard problems.

1. Introduction

A cellular network is a mobile network in which resources are managed in cells. Each cell, a bounded area, is served by an antenna or base station. Cell size and shape depend on signal strength, the presence of obstacles to signal propagation, customer capacity, and cost constraints. The allocation/reallocation problem is formulated as the 0/1 multichoice multidimensional knapsack problem (MMKP), which is an NP-hard combinatorial optimization problem; a detailed study of this problem is presented in [1]. Formally, the MMKP is defined as follows: given a set of groups of variables, one tries to select the best variable in each group. Each variable in a group has a value in an objective function and consumes a certain amount of resources as well. The problem is to select the variables, subject to resource constraints, so that the objective function is maximized.

The 0/1 multichoice multidimensional knapsack problem may thus be mathematically formulated as follows: given n groups of items to pack in some knapsack of m-dimensional capacity c = (c_1, …, c_m), item j of group i has a profit p_ij and a weight vector w_ij = (w_ij1, …, w_ijm), and the problem is to choose one item from each group such that the profit sum is maximized without the weight sum exceeding c in any dimension:

maximize Σ_{i=1..n} Σ_{j=1..l_i} p_ij x_ij, (1.1)

subject to Σ_{i=1..n} Σ_{j=1..l_i} w_ijk x_ij ≤ c_k, k = 1, …, m, (1.2)

Σ_{j=1..l_i} x_ij = 1, i = 1, …, n, x_ij ∈ {0, 1}. (1.3)

All coefficients p_ij, w_ijk, and c_k are positive integers, and the groups are mutually disjoint. In total there are l_1 + l_2 + ⋯ + l_n variables, and they are divided into n groups.
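As a concrete illustration of the formulation above, the following sketch enumerates every one-item-per-group selection of a tiny instance and keeps the most profitable feasible one; the instance data and the function name are our own hypothetical example, not part of the paper, and exhaustive enumeration is of course viable only for very small n.

```python
from itertools import product

def mmkp_exact(profit, weight, capacity):
    """Exhaustively solve a tiny MMKP instance.

    profit[i][j] : profit p_ij of item j in group i
    weight[i][j] : m-dimensional weight vector w_ij of item j in group i
    capacity     : m-dimensional capacity vector c

    Exactly one item is chosen per group (constraints (1.3)); the total
    weight in every dimension must stay within capacity (constraints (1.2)).
    """
    best_value, best_pick = None, None
    # Enumerate one item index per group.
    for pick in product(*(range(len(g)) for g in profit)):
        # Accumulate the weight used in each resource dimension.
        used = [0] * len(capacity)
        for i, j in enumerate(pick):
            for k, w in enumerate(weight[i][j]):
                used[k] += w
        if all(u <= c for u, c in zip(used, capacity)):
            value = sum(profit[i][j] for i, j in enumerate(pick))
            if best_value is None or value > best_value:
                best_value, best_pick = value, pick
    return best_value, best_pick

# Tiny hypothetical instance: two groups of two items each, one resource.
profit = [[10, 5], [6, 8]]
weight = [[[3], [1]], [[2], [4]]]
print(mmkp_exact(profit, weight, capacity=[5]))  # -> (16, (0, 0))
```

The returned tuple gives the optimal profit and the index of the chosen item in each group.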

In the resource allocation/reallocation problem a group represents a terminal that needs to be reallocated. For terminals subject to reallocation, the constraints (1.2) contain the power-estimation constraints together with constraints expressing that a channel cannot be allocated more than one terminal from a particular cell at a time. Equation (1.3) expresses that each terminal must be reallocated to exactly one channel. For terminals to be allocated, the constraints (1.2) contain the power-estimation constraints and the constraints expressing that a terminal cannot be allocated to more than one channel from a particular cell at a time.

The other applications of MMKP are quality of service degradation model, utility model, multisession adaptive multimedia system, problem of allocation of resources on a packet network, and nursing personnel scheduling problem. MMKP has been solved by several algorithms, and they have been cited and compared as benchmarks many times in the literature.

This paper is organized as follows. A brief survey of various researchers’ work pertaining to this problem is given in Section 2. The dominance principles of the intercept matrix, the dominance principle-based heuristic (DPH) approach for solving the MMKP, and the computational complexity of DPH are explained in Section 3. We furnish the results obtained by DPH for all the benchmark problems in Section 4; this section also includes an extensive comparative study of the results of our heuristic against the known optimum or best solutions of the MMKP. Salient features of the algorithm are also enumerated in Section 4, and concluding remarks and future directions are given in Section 5.

2. Previous Work

Depending on the nature of the solution, the existing algorithms for the MMKP can be divided into two groups, namely, exact algorithms striving for exact solutions and heuristic algorithms producing near-optimal solutions. Exact algorithms include branch and bound and the Lagrange multiplier technique [2, 3].

Khan et al. [2, 4] presented an exact algorithm for the MMKP based on branch and bound with a linear programming technique. A solution is explored in each iteration of this algorithm by picking the items of a particular group. All the possible alternative picks of a group are estimated by applying linear programming. The solution with the highest estimated total value is selected for further exploration. A solution is termed the optimal solution if it gives the highest total value among all the explored solutions and an item from each group has already been picked. Khan et al. [4] applied the concept of aggregate resource consumption [5] to pick a new candidate item in a group to solve the MMKP, which resulted in a heuristic named HEU. In HEU, a new item of a group is selected for a possible upgrade if it gives the highest change in earned value per unit change of aggregate resource consumption. Akbar et al. [6] presented a modified version of HEU, namely, M-HEU. In M-HEU, a preprocessing step to find a feasible solution and a postprocessing step to improve the total value of the solution with one upgrade followed by one or more downgrades were added.

Moser et al. [3] used the concept of graceful degradation from the most valuable items based on Lagrange multipliers to solve the MMKP. The selected highest-valued item might be infeasible because of high resource consumption; that is why graceful degradation is carried out over multiple iterations towards the selection of a feasible suboptimal solution. Hifi et al. [7, 8] proposed two different approximate approaches. The first approach is a guided local search-based heuristic in which the trajectories of the solutions are oriented by augmenting the cost function with a penalty term that penalizes bad features of previously visited solutions.

The second approach is a reactive local search in which an explicit check for the repetition of configurations is added to the local search. The algorithm starts from an initial solution and improves it by using a fast iterative procedure. Later, both deblocking and degrading procedures are introduced in order to (i) escape from local optima and (ii) introduce diversification into the search space. Finally, a memory list is applied in order to forbid the repetition of configurations. Recently Cherfi and Hifi [9, 10] have developed two hybrid algorithms, namely, HLB and AHLB, and compared the solutions with another procedure called the column generation method (ALGO) [10]. HLB is a combination of local branching and a column generation solution procedure, and AHLB is known as augmented HLB. These two algorithms are extended versions of Hifi et al.’s [7, 8] previous work. The computational time of these algorithms was fixed from 300 to 1200 sec. The overall best solutions of these two algorithms are presented and compared with our heuristic in Section 4.

Drexl [11] presented a simulated annealing approach to solve a slightly different variant of the MMKP (without the choice constraints (1.3)), namely, the multidimensional knapsack problem (MDKP). Genetic algorithm approaches are not suitable for real-time admission control, as they require a long time to find a suboptimal solution.

Parra-Hernández and Dimopoulos [12] proposed another heuristic, HMMKP, based on linear programming relaxations of the MDKP reduced from the MMKP. A PRAM-model approximation algorithm was devised by Newton et al. [13] for solving the MMKP in parallel. Shahriar et al. [14] proposed a multiprocessor-based heuristic algorithm (MP-HEU) for the MMKP which runs in O(T/p + S(p)) time, where T is the time required by the algorithm on a single processor, p is the number of processors, and S(p) is a function of p accounting for the synchronization overhead. Sbihi [15] has presented a best-first search exact algorithm for the MMKP. The main principle of the algorithm is twofold: (i) to generate an initial feasible solution as a starting lower bound and (ii) at different levels of the search tree, to determine an intermediate upper bound obtained by solving an auxiliary problem and to perform the strategy of fixing items during the exploration.

In this paper, we propose a 1-opt heuristic algorithm based on the dominance principle of the intercept matrix to solve the MMKP. The dominance principle-based heuristic algorithm has been implemented successfully to solve the 0-1 multiconstrained knapsack problem [16]. The main principle of the algorithm is twofold: (i) to generate an initial feasible solution as a starting lower bound and (ii) to improve the initial feasible solution to an optimal or near-optimal one by applying this heuristic iteratively.

3. Dominance Principle-Based Heuristic (DPH)

Many researchers include all possible constraints and variables in the linear programming (LP) models of real-life optimization problems, but some of these constraints and variables may play no role in the optimal solution while adding computational cost. Such variables and constraints are known as redundant constraints and variables.

Preprocessing techniques are necessary to remove such redundant constraints and variables. Researchers [4, 17–25] have proposed many algorithms for LP models; in particular, Paulraj et al. [26] used the intercept matrix of the constraints to identify redundant constraints prior to the start of the solution process in their heuristic approach. Here we use the intercept matrix to identify redundant variables (0-valued variables) as well as selected variables (1-valued variables) for solving the MMKP, since the MMKP is a well-known 0-1 integer programming problem in which many variables take the value zero.

The intercept matrix of the constraints (1.2) is used to identify the variables of value 1 and 0. The algorithm starts by selecting the lowest-valued item of each group as the initial feasible solution (Step 1) and improves the objective value by using the dominance principle of the intercept matrix (Step 3 to Step 5). This sequence of operations is performed ten times.

The construction of the intercept matrix (obtained by dividing the remaining right-hand-side values by the coefficients of (1.2)) is explained in Step 3. The elements of the intercept matrix are used to find, for each candidate item, its minimum intercept; these values are arranged in decreasing order, and the leading element is the dominant variable (Step 5). We use this dominant variable to improve the current feasible solution, and this procedure yields an optimum or near-optimum solution of the MMKP. The dominance principle lets items with lower resource requirements come forward to maximize the profit. The intercept matrix of the constraints (1.2) plays a vital role in achieving this goal in a heuristic manner. Step 4 is used to identify the 0-valued variables, that is, the redundant variables. The various stages of DPH are presented in Algorithm 1.
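Our reading of Steps 1–5 can be sketched as follows. This is a simplified, hypothetical rendering, not the paper's Algorithm 1: the function name, the tie-breaking, and the exact dominance score (largest minimum intercept among profit-improving, feasible candidates) are our assumptions.

```python
def dph_sketch(profit, weight, capacity, iterations=10):
    """Simplified sketch of the DPH idea (assumed reading of Steps 1-5)."""
    m = len(capacity)
    n = len(profit)
    # Step 1: initial feasible solution -- the lowest-valued item per group.
    pick = [min(range(len(g)), key=g.__getitem__) for g in profit]

    def remaining():
        """Capacity left by the current selection, per resource."""
        used = [0] * m
        for i, j in enumerate(pick):
            for k in range(m):
                used[k] += weight[i][j][k]
        return [capacity[k] - used[k] for k in range(m)]

    for _ in range(iterations):
        improved = False
        for i in range(n):                        # groupwise selection
            rem = remaining()
            for k in range(m):                    # free group i's current item
                rem[k] += weight[i][pick[i]][k]
            best_j, best_score = pick[i], None
            for j in range(len(profit[i])):
                if profit[i][j] <= profit[i][pick[i]]:
                    continue                      # only profit-improving swaps
                if any(weight[i][j][k] > rem[k] for k in range(m)):
                    continue                      # Step 4: infeasible, ruled out
                # Steps 3+5: intercepts rem/weight; the candidate with the
                # largest minimum intercept (lowest relative demand) dominates.
                score = min(rem[k] / weight[i][j][k]
                            for k in range(m) if weight[i][j][k] > 0)
                if best_score is None or score > best_score:
                    best_j, best_score = j, score
            if best_j != pick[i]:
                pick[i] = best_j
                improved = True
        if not improved:                          # no upgrade in a full pass
            break
    return sum(profit[i][pick[i]] for i in range(n)), pick

# Tiny hypothetical instance: two groups of two items each, one resource.
print(dph_sketch([[10, 5], [6, 8]], [[[3], [1]], [[2], [4]]], [5]))
```

On this toy instance the sketch starts from the cheapest items and upgrades group 1, returning profit 16 with items (0, 0), which here coincides with the exact optimum; in general a 1-opt heuristic of this kind only guarantees a local optimum.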

Algorithm 1: DPH algorithm for MMKP.

3.1. Example

Consider an MMKP with 3 groups, 8 items, and 2 resources, that is, n = 3 and m = 2, with the groups containing 3, 2, and 3 items, respectively (we convert the two-dimensional problem into one-dimensional notation for our convenience). Initial feasible solution: by using Step 1 of the algorithm, we find the initial feasible solution of the MMKP, (001, 01, 010), with objective value 29. Next we update this solution by using the DPH algorithm iteratively (Step 3 to Step 5). Table 1 illustrates the first-iteration report of the DPH algorithm. The heuristic updates the solution vector and the objective function value.

Table 1: First-iteration report of DPH.

First iteration. For i = 1: the 2nd item in group 1 dominates the other items, and it improves the objective function value from 29 to 34. Thus, the new solution vector is (010, 01, 010) and the objective function value is 34.
For i = 2: there is no dominant item other than the 2nd item in group 2, so no feasible upgrade is possible.
For i = 3: the 2nd item in group 3 dominates the other items, but it is already selected, so again no feasible upgrade is possible.

Thus, at the end of the first iteration the solution vector is (010, 01, 010) and the corresponding objective function value is 34.

Subsequent iterations. For i = 1, 2, and 3, there is no further change in the objective function value. We terminate the iteration process and display the final solution, 45, which is the optimum.

Theorem 3.1. DPH can be realized in O(kmn²l) time, polynomial in the number of groups (n), item types per group (l), constraints (m), and number of iterations (k).

Proof. The computational complexity of finding the heuristic solution of the MMKP using DPH can be obtained as follows. For simplicity, let us assume that the number of variables per group is a constant l (in the case of different numbers of items per group, let l be the maximum number of items in a group), and that there are m resources and n groups. It is easy to verify that the procedure for constructing the intercept matrix takes O(mnl) operations. Step 3 and Step 4 take O(mnl) and O(nl) operations, respectively. One iteration performs these steps n times to complete the groupwise selection, so the overall running time of the procedure DPH for one iteration is O(mn²l). For k iterations, the overall time complexity is O(kmn²l).

4. Experimental Design

The heuristic was tested on 33 instances corresponding to two groups; the first group contains existing instances, namely, I01 to I13 (13 instances), which were tested by Khan [2], and the second group consists of 20 instances, namely, Ins01 to Ins20, which were randomly generated by Hifi et al. [8]. Our algorithm was coded in C++ and run on a Pentium IV 2.40 GHz computer with 512 MB memory under Windows XP Professional. For all 33 instances DPH reached the optimum/best/improved solutions. Table 2 compares the computational results of DPH, Cplex, ALGO, HLB, and AHLB with the known optimum/best solutions [8] for the test instances (° indicates an optimum solution and * indicates a best solution). DPH reaches the best solution for problem I07 in the first iteration itself, and the first-iteration objective function values of this problem over the groupwise selection steps 1 to 100 are presented in Figure 1.

Table 2: Computational results of Cplex, ALGO, HLB, AHLB, and DPH with known optimum/best [8].
Figure 1: First iteration objective function value for I07.

The comparative study of DPH with the other existing heuristic algorithms (HMS: Hifi et al.’s algorithm [7]; RLS, MRLSa, MRLSb: reactive local search and modified reactive local search algorithms [8]; Cplex: the Cplex solver; KLMA: Khan et al.’s algorithm [4]; MOSER: Moser et al.’s algorithm [3]; Opt/best: optimum or best solutions [8]; ALGO: column generation method [10]; HLBCGSP: local branching and column generation [9]; AHLBCGSP: augmented hybrid procedure [9]) is furnished in Tables 3(a) and 3(b) in terms of the number of optimal or best solutions, the average deviation from the optimal/best solution obtained for the group 1 problems, and the number of optimum/overall best solutions.

Table 3: (a) Summarized results for the solution quality of the group 1 problems. (b) Summarized results for the solution quality of the group 2 problems.

The Moser et al. [3] approach is a heuristic based on the Lagrange multiplier method. Khan et al. [4] use an iterative improvement procedure, namely, KLMA, based on the concept of aggregate resources presented in [5]. Both of these methods failed to find the optimum/best solutions for the group-1 instances, although KLMA identifies the optimum solution for problem I06. The average deviations of the KLMA and MOSER [3] approaches from the optimum/best solutions are 1.46% and 5.99%, respectively.

Hifi et al. [7] have developed two greedy approaches for the MMKP, namely, the constructive procedure (CP) and the complementary constructive procedure (CCP). A detailed description of CP and CCP can be found in [9]. To compare the performance of DPH, we consider the best solution of Hifi et al.’s [7] algorithm, referred to herein as HMS. HMS determines the optimum solutions for I01 and I06 only, and the average deviation of HMS from the optimum/best is known to be 1.92%. Hifi et al. [8] presented two more algorithms for the MMKP, namely, reactive local search (RLS) and modified reactive local search (MRLS). The RLS approach improves the solution obtained by CCP. The core of the algorithm is mainly based on two strategies, namely, degrading and deblocking; the detailed procedures are available in [8]. RLS obtains the optimum solutions for 4 instances, namely, I01, I02, I05, and I06, with an average deviation of 1.07%. The modified versions of RLS are known as MRLSa and MRLSb. MRLSa obtained 4 optimum solutions (I01, I02, I03, and I06) with an average deviation of 0.91%. MRLSb finds the optimum/best solutions for all the test instances except I12, for which the deviation from the best solution is 0.002%, so its average deviation is practically zero. The best results of the Cplex solver are also compared with DPH; Cplex obtained 8 optimum/best solutions out of 13 instances (I01 to I06, I10, and I12), and the average deviation is 0.01%.

Cherfi and Hifi [9, 10] have developed three approaches for solving the MMKP, namely, ALGO, HLB, and AHLB. The column generation method (ALGO) identified the optimum/best solution for all the instances of the group-1 problems and reached the overall best solution for 7 instances (I01 to I07). Cherfi and Hifi [9, 10] used three time limits for HLB; we consider the best solution among the three time bounds. HLB found the optimum/best solution for all test instances in group 1 and obtained 7 overall best solutions (I01 to I07). AHLB was executed with two time limits, and the best results are considered for this comparative study. AHLB provides the optimum/best solution for all the test instances in group 1 and achieved the overall best for all 13 instances. DPH was set to perform at most 10 iterations and obtained the optimum/best solution for all the test instances. Since the working environments are different, we have not compared the running times of all the algorithms. DPH reaches high-quality solutions within 10 seconds, whereas HLB and AHLB need more computational time to achieve solutions of this quality. ALGO, HLB, and AHLB were coded in C++ and run on an Ultra-Sparc 10; their maximum time requirement for this solution quality is 1200 sec.

For the group-2 instances, the optimum solutions are not available in the literature because of the hardness of the problem. The computational results of MRLSb, Cplex, ALGO, HLB, AHLB, and DPH are reported in Table 3(b). Relative to the best solutions of Hifi et al. [8], we observe that ALGO, AHLB, and DPH reached the best for all 20 instances, while the other algorithms, HLB, MRLSb, and Cplex, reached it for 19, 18, and 6 instances of group 2, respectively. At the same time, ALGO, HLB, AHLB, and DPH gave improved solutions for some of the test instances that are superior to those of Hifi et al. [8]. In Table 3(b), we have also presented the number of overall best solutions for these test instances.

From Tables 3(a) and 3(b), we conclude that DPH is a more efficient technique than the other algorithms. The reason is that DPH reached these solutions within 10 seconds (maximum) for all the test instances, whereas ALGO, HLB, and AHLB required considerably more computational time [9, 10]. The time complexity of our algorithm is O(kmn²l), where k is the number of iterations (we fixed k as 10).

4.1. Salient Features of DPH

This heuristic reduces the search space used to find near-optimal solutions of the MMKP. The computational complexity is O(kmn²l), and the space complexity is O(mnl). It reaches the optimum or near-optimum point within n iterations, where n is the number of groups. Due to the dominance principles, this heuristic identifies the zero-valued variables instantaneously. DPH takes a maximum CPU time of 10 seconds for the large-size problems. From the comparison tables we observe that our algorithm is an effective one.

5. Conclusion

In this paper, we presented a dominance principle-based heuristic approach for tackling the NP-hard 0/1 multichoice multidimensional knapsack problem. This approach has been tested on 33 state-of-the-art benchmark instances and has produced optimal/best solutions for all the test instances given in the literature. The maximum computational time is 10 seconds, whereas the other recent algorithms require up to 1200 seconds. This heuristic has O(kmn²l) complexity, and it requires k iterations to solve the MMKP. The experimental data show that the solution quality achieved by this heuristic is always close to 100 percent of the optimum/best. The basic idea behind the proposed scheme may be explored to tackle other NP-hard problems.


Acknowledgment

The authors wish to express their gratitude to the referees for their careful reading of the manuscript and helpful suggestions.


References

  1. R. Parra-Hernández and N. Dimopoulos, “Channel resource allocation/reallocation in cellular communication and linear programming,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 3, pp. 2983–2989, October 2003.
  2. S. Khan, Quality adaptation in a multi-session adaptive multimedia system: model and architecture, Ph.D. dissertation, University of Victoria, Victoria, Canada, 1998.
  3. M. Moser, D. P. Jokanovic, and N. Shiratori, “An algorithm for the multidimensional multichoice knapsack problem,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. 80, no. 3, pp. 582–589, 1997.
  4. S. Khan, K. F. Li, E. G. Manning, and M. D. M. Akbar, “Solving the knapsack problem for adaptive multimedia systems,” Studia Informatica, vol. 2, pp. 154–174, 2002.
  5. Y. Toyoda, “A simplified algorithm for obtaining approximate solutions to zero-one programming problems,” Management Science, vol. 21, no. 12, pp. 1417–1427, 1975.
  6. M. M. Akbar, E. G. Manning, G. C. Shoja, and S. Khan, “Heuristic solutions for the multiple-choice multi-dimension knapsack problem,” in Proceedings of the International Conference on Computational Science, V. N. Alexandrov, J. Dongarra, B. A. Juliano, R. S. Renner, C. Jeng, and K. Tan, Eds., Lecture Notes in Computer Science, pp. 659–668, San Francisco, Calif, USA, May 2001.
  7. M. Hifi, M. Michrafy, and A. Sbihi, “Heuristic algorithms for the multiple-choice multi-dimensional knapsack problem,” Journal of the Operational Research Society, vol. 55, no. 12, pp. 1323–1332, 2004.
  8. M. Hifi, M. Michrafy, and A. Sbihi, “A reactive local search-based algorithm for the multiple-choice multi-dimensional knapsack problem,” Computational Optimization and Applications, vol. 33, no. 2-3, pp. 271–285, 2006.
  9. N. Cherfi and M. Hifi, “Hybrid algorithms for the multiple-choice multi-dimensional knapsack problem,” International Journal of Operational Research, vol. 5, no. 1, pp. 89–109, 2009.
  10. N. Cherfi and M. Hifi, “A column generation method for the multiple-choice multi-dimensional knapsack problem,” Computational Optimization and Applications, vol. 46, no. 1, pp. 51–73, 2010.
  11. A. Drexl, “A simulated annealing approach to the multiconstraint zero-one knapsack problem,” Computing, vol. 40, no. 1, pp. 1–8, 1988.
  12. R. Parra-Hernández and N. J. Dimopoulos, “A new heuristic for solving the multichoice multidimensional knapsack problem,” IEEE Transactions on Systems, Man, and Cybernetics Part A, vol. 35, no. 5, pp. 708–717, 2005.
  13. M. A. H. Newton, M. W. H. Sadid, and M. M. Akbar, “A parallel heuristic algorithm for multiple-choice multidimensional knapsack problem,” in Proceedings of the International Conference on Computer and Information Technology, pp. 181–184, Dhaka, Bangladesh, December 2003.
  14. A. Z. M. Shahriar, M. M. Akbar, M. S. Rahman, and M. A. H. Newton, “A multiprocessor based heuristic for multi-dimensional multiple-choice knapsack problem,” Journal of Supercomputing, vol. 43, no. 3, pp. 257–280, 2008.
  15. A. Sbihi, “A best first search exact algorithm for the multiple-choice multidimensional knapsack problem,” Journal of Combinatorial Optimization, vol. 13, no. 4, pp. 337–351, 2007.
  16. S. Raja Balachandar and K. Kannan, “A new polynomial time algorithm for 0-1 multiple knapsack problem based on dominant principles,” Applied Mathematics and Computation, vol. 202, no. 1, pp. 71–77, 2008.
  17. A. L. Brearley, G. Mitra, and H. P. Williams, “Analysis of mathematical programming problems prior to applying the simplex algorithm,” Mathematical Programming, vol. 8, pp. 54–83, 1975.
  18. J. Gondzio, “Presolve analysis of linear programs prior to applying an interior point method,” INFORMS Journal on Computing, vol. 9, no. 1, pp. 73–91, 1997.
  19. I. Ioslovich, “Robust reduction of a class of large-scale linear programs,” SIAM Journal on Optimization, vol. 12, no. 1, pp. 262–282, 2001.
  20. M. H. Karwan, V. Lotfi, J. Telgen, and S. Zionts, Redundancy in Mathematical Programming: A State-of-the-Art Survey, vol. 206 of Lecture Notes in Economics and Mathematical Systems, Springer, Berlin, Germany, 1983.
  21. T. H. Mattheiss, “An algorithm for determining irrelevant constraints and all vertices in systems of linear inequalities,” Operations Research, vol. 21, pp. 247–260, 1973.
  22. Cs. Mészáros and U. H. Suhl, “Advanced preprocessing techniques for linear and quadratic programming,” OR Spectrum, vol. 25, no. 4, pp. 575–595, 2003.
  23. N. V. Stojković and P. S. Stanimirović, “Two direct methods in linear programming,” European Journal of Operational Research, vol. 131, no. 2, pp. 417–439, 2001.
  24. J. Telgen, “Identifying redundant constraints and implicit equalities in systems of linear constraints,” Management Science, vol. 29, no. 10, pp. 1209–1222, 1983.
  25. J. A. Tomlin and J. S. Welch, “Finding duplicate rows in a linear programming model,” Operations Research Letters, vol. 5, no. 1, pp. 7–11, 1986.
  26. S. Paulraj, C. Chellappan, and T. R. Natesan, “A heuristic approach for identification of redundant constraints in linear programming models,” International Journal of Computer Mathematics, vol. 83, no. 8-9, pp. 675–683, 2006.