Research Article  Open Access
Masatoshi Sakawa, Kosuke Kato, "An Interactive Fuzzy Satisficing Method for Multiobjective Nonlinear Integer Programming Problems with Block-Angular Structures through Genetic Algorithms with Decomposition Procedures", Advances in Operations Research, vol. 2009, Article ID 372548, 17 pages, 2009. https://doi.org/10.1155/2009/372548
An Interactive Fuzzy Satisficing Method for Multiobjective Nonlinear Integer Programming Problems with Block-Angular Structures through Genetic Algorithms with Decomposition Procedures
Abstract
We focus on multiobjective nonlinear integer programming problems with block-angular structures, which often arise as mathematical models of large-scale discrete systems optimization. By considering the vague nature of the decision maker's judgments, fuzzy goals of the decision maker are introduced, and the problem is interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. For deriving a satisficing solution for the decision maker, we develop an interactive fuzzy satisficing method. Exploiting the block-angular structures, we also propose genetic algorithms with decomposition procedures. Illustrative numerical examples are provided to demonstrate the feasibility and efficiency of the proposed method.
1. Introduction
Genetic algorithms (GAs) [1], initiated by Holland, his colleagues, and his students at the University of Michigan in the 1970s as stochastic search techniques based on the mechanism of natural selection and natural genetics, have received a great deal of attention for their potential as optimization techniques for solving discrete optimization problems and other hard optimization problems. Although genetic algorithms were little known at first, since the publication of Goldberg's book [2] they have attracted considerable attention in a number of fields as a methodology for optimization, adaptation, and learning. Recent applications of genetic algorithms to optimization problems, especially to various kinds of single-objective discrete optimization problems and other hard optimization problems, show continuing advances [3–13].
Sakawa et al. proposed genetic algorithms with double strings (GADS) [14] for obtaining an approximate optimal solution to multiobjective multidimensional 0-1 knapsack problems. They also proposed genetic algorithms with double strings based on reference solution updating (GADSRSU) [15] for multiobjective general 0-1 programming problems involving both positive and negative coefficients. Furthermore, they proposed genetic algorithms with double strings using linear programming relaxation (GADSLPR) [16] for multiobjective multidimensional integer knapsack problems, and genetic algorithms with double strings using linear programming relaxation based on reference solution updating (GADSLPRRSU) for linear integer programming problems [17]. Observing that solution methods had been proposed only for specialized types of nonlinear integer programming problems [18–23], Sakawa et al. [24] proposed genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) as an approximate solution method for general nonlinear integer programming problems.
In general, however, actual decision making problems formulated as mathematical programming problems involve very large numbers of variables and constraints. Most such large-scale problems in the real world have special structures that can be exploited in solving them. One familiar special structure is the block-angular structure of the constraints, and several kinds of decomposition methods for linear and nonlinear programming problems with block-angular structures have been proposed [25]. Unfortunately, however, for large-scale problems with discrete variables, it seems quite difficult to develop an efficient solution method for obtaining an exact optimal solution. For multidimensional 0-1 knapsack problems with block-angular structures, Sakawa et al. [9, 26] proposed genetic algorithms with decomposition procedures (GADPs), using a triple string representation that exploits the block-angular structure. Furthermore, by incorporating the fuzzy goals of the decision maker, they [9] also proposed an interactive fuzzy satisficing method for multiobjective multidimensional 0-1 knapsack problems with block-angular structures.
Under these circumstances, in this paper, as a typical mathematical model of large-scale multiobjective discrete systems optimization, we consider multiobjective nonlinear integer programming problems with block-angular structures. By considering the vague nature of the decision maker's judgments, fuzzy goals of the decision maker are introduced, and the problem is interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. For deriving a satisficing solution for the decision maker, we develop an interactive fuzzy satisficing method. Exploiting the block-angular structures, we also propose genetic algorithms with decomposition procedures for nonlinear integer programming problems with block-angular structures.
The paper is organized as follows. Section 2 formulates multiobjective nonlinear integer programming problems with block-angular structures. Section 3 develops an interactive fuzzy satisficing method for deriving a satisficing solution for the decision maker. Section 4 proposes GADPCRRSU as an approximate solution method for nonlinear integer programming problems with block-angular structures. Section 5 provides illustrative numerical examples to demonstrate the feasibility and efficiency of the proposed method. Finally, Section 6 presents the conclusions.
2. Problem Formulation
Consider multiobjective nonlinear integer programming problems with block-angular structures formulated as (2.1),
where $x_j$, $j = 1, \dots, p$, are $n_j$-dimensional integer decision variable column vectors and $x = (x_1, \dots, x_p)$. The constraints $g_0(x_1, \dots, x_p) \le b_0$ are called coupling constraints, with dimension $m_0$, while each of the constraints $g_j(x_j) \le b_j$, $j = 1, \dots, p$, is called a block constraint, with dimension $m_j$. In (2.1), it is assumed that the objective functions $f_i$, $i = 1, \dots, k$, and the constraint functions $g_j$, $j = 0, 1, \dots, p$, are general nonlinear functions. The positive integers $v_{j,l}$, $l = 1, \dots, n_j$, represent upper bounds for the elements of $x_j$. In the following, for notational convenience, the feasible region of (2.1) is denoted by $X$.
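The displayed formulation (2.1) can be written out as follows; this is a reconstruction of the standard block-angular form that the surrounding prose describes, and the symbols $f_i$, $g_j$, $b_j$, and $v_{j,l}$ are assumptions consistent with the text rather than necessarily the authors' original notation:

```latex
\begin{aligned}
\text{minimize}\quad & f_i(x_1, \dots, x_p), \qquad i = 1, \dots, k,\\
\text{subject to}\quad & g_0(x_1, \dots, x_p) \le b_0
    \quad\text{($m_0$-dimensional coupling constraints)},\\
& g_j(x_j) \le b_j, \quad j = 1, \dots, p
    \quad\text{($m_j$-dimensional block constraints)},\\
& x_j \in \{0, 1, \dots, v_{j,1}\} \times \cdots \times \{0, 1, \dots, v_{j,n_j}\},
    \quad j = 1, \dots, p.
\end{aligned}
```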
As an example of nonlinear integer programming problems with block-angular structures in practical applications, Bretthauer et al. [27] formulated health care capacity planning, resource constrained production planning, and portfolio optimization with industry constraints.
3. An Interactive Fuzzy Satisficing Method
In order to consider the vague nature of the decision maker's judgments about each objective function in (2.1), we introduce fuzzy goals such as "$f_i(x)$ should be substantially less than or equal to a certain value." Problem (2.1) can then be rewritten as (3.1), where $\mu_i$ is the membership function quantifying the fuzzy goal for the $i$th objective function of (2.1). To be more specific, if the decision maker feels that $f_i(x)$ should be less than or equal to at least a certain level and that a smaller level is fully satisfactory, the shape of a typical membership function is as shown in Figure 1.
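The linear membership function of Figure 1 is easy to state in code. A minimal sketch follows; the parameter names `z_sat` and `z_unsat` are illustrative, standing for the fully satisfactory and unacceptable objective levels the decision maker specifies:

```python
def linear_membership(f_value, z_sat, z_unsat):
    """Linear membership function for the fuzzy goal "f_i(x) should be
    substantially less than or equal to a certain value".

    Returns 1.0 when the objective value is at or below the fully
    satisfactory level z_sat, 0.0 at or above the unacceptable level
    z_unsat, and interpolates linearly in between.
    """
    if f_value <= z_sat:
        return 1.0
    if f_value >= z_unsat:
        return 0.0
    return (z_unsat - f_value) / (z_unsat - z_sat)
```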
Since (3.1) is regarded as a fuzzy multiobjective optimization problem, a complete optimal solution that simultaneously minimizes all of the multiple objective functions does not always exist when the objective functions conflict with each other. Thus, instead of a complete optimal solution, as a natural extension of the Pareto optimality concept for ordinary multiobjective programming problems, Sakawa et al. [28, 29] introduced the concept of M-Pareto optimal solutions, which is defined in terms of membership functions instead of objective functions, where M refers to membership.
Definition 3.1 (M-Pareto optimality). A feasible solution $x^* \in X$ is said to be M-Pareto optimal to a fuzzy multiobjective optimization problem if and only if there does not exist another feasible solution $x \in X$ such that $\mu_i(f_i(x)) \ge \mu_i(f_i(x^*))$ for all $i = 1, \dots, k$ and $\mu_j(f_j(x)) > \mu_j(f_j(x^*))$ for at least one $j$.
Introducing an aggregation function $\mu_D$ for the membership functions in (3.1), the problem can be rewritten as (3.2), where the aggregation function $\mu_D$ represents the degree of satisfaction or preference of the decision maker for the whole set of fuzzy goals. In the conventional fuzzy approaches, it has been implicitly assumed that the minimum operator is the proper representation of the decision maker's fuzzy preferences. However, it should be emphasized here that this approach is preferable only when the decision maker feels that the minimum operator is appropriate. In other words, in general decision situations, the decision maker does not always use the minimum operator when combining the fuzzy goals and/or constraints. Probably the most crucial problem in (3.2) is the identification of an appropriate aggregation function which well represents the decision maker's fuzzy preferences. If $\mu_D$ can be explicitly identified, then (3.2) reduces to a standard mathematical programming problem. However, this rarely happens, and as an alternative, an interaction with the decision maker is necessary to find a satisficing solution for (3.1).
In order to generate candidates for a satisficing solution which are M-Pareto optimal, the decision maker is asked to specify aspiration levels of achievement for all membership functions, called reference membership levels. For reference membership levels $\hat{\mu}_i$, $i = 1, \dots, k$, given by the decision maker, the corresponding M-Pareto optimal solution, which is nearest to the requirements in the minimax sense, or better than that if the reference membership levels are attainable, is obtained by solving the following augmented minimax problem (3.3), where $\rho$ is a sufficiently small positive real number.
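In Sakawa's interactive fuzzy satisficing framework [28, 29], the augmented minimax problem (3.3) usually takes the following form; the notation here is the standard one, with $\hat{\mu}_i$ the reference membership levels and $\rho$ the augmentation parameter, and the exact symbols of the original display are assumptions:

```latex
\min_{x \in X}\;\; \max_{i = 1, \dots, k}
\left\{ \hat{\mu}_i - \mu_i\bigl(f_i(x)\bigr)
+ \rho \sum_{l=1}^{k} \Bigl( \hat{\mu}_l - \mu_l\bigl(f_l(x)\bigr) \Bigr) \right\}
```

The augmentation term with $\rho > 0$ rules out weakly M-Pareto optimal solutions that a pure minimax objective could return.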
We can now construct an interactive algorithm in order to derive a satisficing solution for the decision maker from among the M-Pareto optimal solution set. The procedure of the interactive fuzzy satisficing method is summarized as follows.
3.1. An Interactive Fuzzy Satisficing Method
Step 1. Calculate the individual minimum and maximum of each objective function under the given constraints by solving the problems (3.4), that is, $\min_{x \in X} f_i(x)$ and $\max_{x \in X} f_i(x)$, $i = 1, \dots, k$.
Step 2. By considering the individual minimum and maximum of each objective function, the decision maker subjectively specifies membership functions $\mu_i$, $i = 1, \dots, k$, to quantify fuzzy goals for the objective functions.
Step 3. The decision maker sets initial reference membership levels $\hat{\mu}_i$, $i = 1, \dots, k$.
Step 4. For the current reference membership levels, solve the augmented minimax problem (3.3) to obtain the M-Pareto optimal solution and the corresponding membership function values.
Step 5. If the decision maker is satisfied with the current membership levels of the M-Pareto optimal solution, stop; the current M-Pareto optimal solution is the satisficing solution of the decision maker. Otherwise, ask the decision maker to update the reference membership levels $\hat{\mu}_i$, $i = 1, \dots, k$, by considering the current values of the membership functions, and return to Step 4.
In the interactive fuzzy satisficing method, it is required to solve the nonlinear integer programming problems with block-angular structures (3.3) and (3.4). Note that these problems are single-objective nonlinear integer programming problems with block-angular structures, which are themselves difficult to solve exactly. Recognizing this difficulty, in the next section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU).
4. Genetic Algorithms with Decomposition Procedures
As discussed above, in this section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU) as an approximate solution method for nonlinear integer programming problems with block-angular structures.
Consider single-objective nonlinear integer programming problems with block-angular structures formulated as (4.1).
Observe that this problem can be viewed as a singleobjective version of the original problem (2.1).
Sakawa et al. [24] have already studied genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) for ordinary nonlinear integer programming problems, in which an individual is represented by a double string. In a double string, as shown in Figure 2, the upper row holds an index $s(l)$ of a variable in the solution space, while the lower row holds the value, an integer among $\{0, 1, \dots, v_{s(l)}\}$, assigned to the $s(l)$th variable $x_{s(l)}$.
In view of the block-angular structure of (4.1), it seems quite reasonable to define an individual as an aggregation of $p$ subindividuals, one for each block constraint, as shown in Figure 3.
If these subindividuals are represented by double strings, then for each subindividual a phenotype (subsolution) satisfying the corresponding block constraint can be obtained by the decoding algorithm in GADSCRRSU.
Unfortunately, however, the simple combination of these subsolutions does not always satisfy the coupling constraints. To cope with this problem, a triple string representation, as shown in Figure 4, and the corresponding decoding algorithm are presented as an extension of the double string representation and its decoding algorithm. By using the proposed representation and decoding algorithm, a phenotype (solution) satisfying both the block constraints and the coupling constraints can be obtained for each individual.
To be more specific, in a triple string which represents a subindividual corresponding to the $j$th block, the top element represents the priority of the $j$th block, each element of the middle row is an index of a variable in the phenotype, and each element of the bottom row takes an integer value within the variable's bounds. As in GADSCRRSU, a feasible solution, called a reference solution, is necessary for decoding triple strings. In the proposed GADPCRRSU, the reference solution is obtained as a solution to a minimization problem of constraint violation. In the following, we summarize the decoding algorithm for triple strings using a reference solution, where $N$ is the number of individuals and $r$ is a counter for the individual number.
4.1. Decoding Algorithm for Triple String
Step 1. Let .
Step 2. If , go to Step 3. Otherwise, go to Step 11.
Step 3. Let , , .
Step 4. Find such that . Let , .
Step 5. Let .
Step 6. If and , let , , and go to Step 7. Otherwise, let and go to Step 7.
Step 7. If , let and go to Step 8. Otherwise, go to Step 5.
Step 8. If , go to Step 9. Otherwise, go to Step 4.
Step 9. If and , go to Step 11. Otherwise, go to Step 10.
Step 10. Find such that for . Then, let , . Furthermore, find such that and let , . The remainder elements of are set to . Terminate the decoding process.
Step 11. Let , and go to Step 12.
Step 12. Find such that and let .
Step 13. Let . If , let and go to Step 15. If , go to Step 14.
Step 14. If and , let and go to Step 15. Otherwise, let , and go to Step 15.
Step 15. If , go to Step 13. Otherwise, let and go to Step 16.
Step 16. If , go to Step 12. Otherwise, and go to Step 17.
Step 17. If , go to Step 2. Otherwise, terminate the decoding process.
It is expected that an optimal solution to the continuous relaxation problem will be a good approximation to an optimal solution of the original nonlinear integer programming problem. In the proposed method, after obtaining an (approximate) optimal solution $\bar{x}$ to the continuous relaxation problem, we suppose that each decision variable $x_i$ takes exactly or approximately the same value as $\bar{x}_i$. In particular, decision variables whose relaxed values $\bar{x}_i$ are integral are very likely to be equal to $\bar{x}_i$.
To be more specific, the information of the (approximate) optimal solution to the continuous relaxation problem of (4.1) is used when generating the initial population and performing mutation. In order to generate the initial population, when we determine the value of each element in the lowest row of a triple string, we use a Gaussian random variable with mean $\bar{x}_i$ and a prescribed variance. In mutation, when we change the value of some element of the lowest row, we also use a Gaussian random variable with mean $\bar{x}_i$ and a prescribed variance.
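As a concrete illustration of how the relaxation solution can seed the lowest row of a triple string, the following sketch draws an integer value from a Gaussian centred on the relaxed value and clips it to the variable's feasible range; the function and parameter names are illustrative, not the paper's:

```python
import random

def sample_integer_value(relaxed_value, upper_bound, sigma):
    """Sample one integer decision-variable value for the lowest row of
    a triple string, using a Gaussian centred on the value of the
    continuous-relaxation optimum (a sketch).

    The draw is rounded to the nearest integer and clipped to the
    feasible range {0, 1, ..., upper_bound}.
    """
    draw = random.gauss(relaxed_value, sigma)
    return min(max(int(round(draw)), 0), upper_bound)
```

With a small `sigma`, most samples coincide with the rounded relaxed value, which is exactly the bias toward the relaxation optimum the method relies on.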
Various kinds of reproduction methods have been proposed. Among them, Sakawa et al. [14] investigated the performance of six reproduction operators, namely, ranking selection, elitist ranking selection, expected value selection, elitist expected value selection, roulette wheel selection, and elitist roulette wheel selection, and confirmed that elitist expected value selection is relatively efficient for multiobjective 0-1 programming problems incorporating the fuzzy goals of the decision maker. Therefore, elitist expected value selection, that is, elitism and expected value selection combined, is adopted here. Elitism and expected value selection are summarized as follows.
Elitism
If the fitness of an individual in past populations is larger than that of every individual in the current population, preserve this individual in the current population.
Expected Value Selection
For a population consisting of $N$ individuals, the expected number of copies of each subindividual $s_j^r$ (the $j$th subindividual of the $r$th individual) in the next population is given by $N$ times the ratio of its fitness to the total fitness of the subpopulation.
The integral part of this expected number gives the definite number of copies preserved in the next population, while its decimal part gives the probability with which one additional copy of the subindividual is preserved.
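A sketch of expected value selection as described above, applied here to a single subpopulation; names and the padding/trimming rule at the end are illustrative choices, not the paper's:

```python
import random

def expected_value_selection(population, fitnesses):
    """Expected value selection (a sketch). Each individual's expected
    copy count is N * fitness / total_fitness; the integer part fixes a
    deterministic number of copies, and the fractional part is the
    probability of one additional copy."""
    n = len(population)
    total = sum(fitnesses)
    selected = []
    for ind, fit in zip(population, fitnesses):
        expected = n * fit / total
        copies = int(expected)                    # deterministic copies
        if random.random() < expected - copies:   # fractional remainder
            copies += 1
        selected.extend([ind] * copies)
    # pad or trim so the population size stays constant
    while len(selected) < n:
        selected.append(random.choice(population))
    return selected[:n]
```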
If a single-point or multi-point crossover is directly applied to the upper or middle string of triple-string individuals, some element of the offspring's string may take the same value as another element, so that the string is no longer a valid permutation of indices. The same violation occurs in solving traveling salesman problems or scheduling problems through genetic algorithms. In order to avoid this violation, a crossover method called partially matched crossover (PMX) is modified to suit triple strings. PMX is applied as usual to upper strings, whereas, for the pair of middle and lower strings, PMX for double strings [14] is applied to every subindividual.
It is now appropriate to present the detailed procedures of the crossover method for triple strings.
4.2. Partially Matched Crossover (PMX) for Upper String
Let $a = (a_1, \dots, a_p)$ be the upper string of an individual and let $b = (b_1, \dots, b_p)$ be the upper string of another individual. Prepare copies $a'$ and $b'$ of $a$ and $b$, respectively.
Step 1. Choose two crossover points at random on these strings, say $h_1$ and $h_2$ with $h_1 < h_2$.
Step 2. Set $l := h_1$ and repeat the following procedures. (a) Find $l'$ such that $a'_{l'} = b_l$. Then, interchange $a'_l$ with $a'_{l'}$ and set $l := l + 1$. (b) If $l > h_2$, stop and let $a'$ be the offspring of $a$. Otherwise, return to (a). Step 2 is carried out for $b'$ in the same manner, as shown in Figure 5.
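The upper-string PMX steps above can be sketched as follows; since the upper string is a permutation of block indices, the swap in step (a) keeps the offspring a valid permutation (the function and variable names are illustrative):

```python
import random

def pmx_upper(parent_a, parent_b):
    """Partially matched crossover (PMX) for upper strings, which are
    permutations of block indices (a sketch of the procedure in
    Section 4.2). Returns one offspring derived from parent_a."""
    n = len(parent_a)
    child = list(parent_a)
    h1, h2 = sorted(random.sample(range(n), 2))  # two crossover points
    for pos in range(h1, h2 + 1):
        # bring parent_b's value at this position into the child by a
        # swap, which keeps the child a valid permutation
        wanted = parent_b[pos]
        j = child.index(wanted)
        child[pos], child[j] = child[j], child[pos]
    return child

# usage: the offspring is always a permutation of the same indices
offspring = pmx_upper([0, 1, 2, 3, 4], [3, 1, 4, 0, 2])
```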
4.3. Partially Matched Crossover (PMX) for Double String
Let $(s, y)$ be the middle and lower parts of a subindividual in the $j$th subpopulation, and let $(s', y')$
be the middle and lower parts of another subindividual in the $j$th subpopulation. First, prepare copies $(\bar{s}, \bar{y})$ and $(\bar{s}', \bar{y}')$ of $(s, y)$ and $(s', y')$, respectively.
Step 1. Choose two crossover points at random on these strings, say $h_1$ and $h_2$ with $h_1 < h_2$.
Step 2. Set $l := h_1$ and repeat the following procedures. (a) Find $l'$ such that $\bar{s}_{l'} = s'_l$. Then, interchange $\bar{s}_l$ with $\bar{s}_{l'}$ and set $l := l + 1$. (b) If $l > h_2$, stop. Otherwise, return to (a).
Step 3. Replace the part from $h_1$ to $h_2$ of $\bar{y}$ with that of $y'$ and let $(\bar{s}, \bar{y})$ be the offspring of $(s, y)$.
This procedure is carried out for $(\bar{s}', \bar{y}')$ in the same manner, as shown in Figure 6.
It is considered that mutation plays the role of local random search in genetic algorithms. For the lower string of a triple string only, mutation of bit-reverse type is adopted and applied to every subindividual.
For the upper string and for the middle and lower strings of the triple string, inversion, defined by the following algorithm, is adopted.
Step 1. After determining two inversion points $h_1$ and $h_2$ at random, pick out the part of the string from $h_1$ to $h_2$.
Step 2. Arrange the substring in reverse order.
Step 3. Put the rearranged substring back into the string.
Figure 7 illustrates examples of mutation.
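The three inversion steps can be sketched as follows (names are illustrative); the multiset of elements is preserved, so a permutation string remains a permutation:

```python
import random

def inversion(string):
    """Inversion operator: pick two points at random and reverse the
    substring between them, inclusive (a sketch of Section 4.3's
    three steps)."""
    n = len(string)
    h1, h2 = sorted(random.sample(range(n), 2))  # two inversion points
    mutated = list(string)
    mutated[h1:h2 + 1] = reversed(mutated[h1:h2 + 1])
    return mutated
```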
Now we are ready to introduce the genetic algorithm with decomposition procedures as an approximate solution method for nonlinear integer programming problems with block-angular structures. The outline of the procedures is shown in Figure 8.
4.4. Computational Procedures
Step 1. Set the generation index $t := 0$ and determine the parameter values: the population size $N$, the probability of crossover $p_c$, the probability of mutation $p_m$, the probability of inversion $p_i$, the variances used in initialization and mutation, the minimal search generation $I_{\min}$, and the maximal search generation $I_{\max}$.
Step 2. Generate $N$ individuals, whose subindividuals are of triple string type, at random.
Step 3. Evaluate each individual (subindividual) on the basis of the phenotype obtained by the decoding algorithm, and calculate the mean fitness and the maximal fitness of the population. If $t > I_{\min}$ and the fitness has sufficiently converged, or if $t > I_{\max}$, regard an individual with the maximal fitness as an optimal individual and terminate. Otherwise, set $t := t + 1$ and proceed to Step 4.
Step 4. Apply the reproduction operator to all subpopulations.
Step 5. Apply the PMX for double strings to the middle and lower parts of every subindividual according to the probability of crossover $p_c$.
Step 6. Apply the mutation operator of the bit-reverse type to the lower part of every subindividual according to the probability of mutation $p_m$, and apply the inversion operator to the middle and lower parts of every subindividual according to the probability of inversion $p_i$.
Step 7. Apply the PMX for upper strings according to the probability of crossover $p_c$.
Step 8. Apply the inversion operator for upper strings according to the probability of inversion $p_i$ and return to Step 3.
It should be noted that, in the algorithm, the operations in Steps 4, 5, and 6 can be applied to every subindividual of all individuals independently. As a result, it is possible to reduce the amount of working memory needed to solve the problem and to carry out parallel processing.
5. Numerical Examples
In order to demonstrate the feasibility and efficiency of the proposed method, consider the following multiobjective quadratic integer programming problem with block-angular structures:
For comparison, genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) [24] are also adopted. It is significant to note here that decomposition procedures are not involved in GADSCRRSU.
For this problem, the numbers of objectives, blocks, variables, and constraints are fixed in advance; the coefficients in the objectives and constraints are determined by uniform random numbers, and the constants in the constraints are determined so that the feasible region is not empty.
Numerical experiments are performed on a personal computer (CPU: Intel Celeron processor, 900 MHz; memory: 256 MB; C compiler: Microsoft Visual C++ 6.0).
Parameter values of GADPCRRSU are set as: population size , crossover rate , mutation rate , inversion rate , variances , , minimal search generation number and maximal search generation number .
In this numerical example, for the sake of simplicity, the linear membership function is adopted, and its parameter values are determined by the method of Zimmermann [30]. For the initial reference membership levels, the augmented minimax problem (3.3) is solved; the obtained solution is shown in the second column of Table 1. Assume that the hypothetical decision maker is not satisfied with this solution and feels that some of the membership values should be improved at the expense of the others. The decision maker then updates the reference membership levels, and the result for the updated levels is shown in the third column of Table 1. Since the decision maker is still not satisfied with the current solution, the reference membership levels are updated once more. A similar procedure continues in this way and, in this example, a satisficing solution for the decision maker is derived at the third interaction.

Table 1 shows that the proposed interactive method using GADPCRRSU, with its decomposition procedures, finds an (approximate) optimal solution at each interaction in a shorter time than the method using GADSCRRSU, which has no decomposition procedures.
Furthermore, in order to see how the computation time changes with the size of block-angular nonlinear integer programming problems, typical problems with 10, 20, 30, 40, and 50 variables are solved by GADPCRRSU and GADSCRRSU. As depicted in Figure 9, the computation time of the proposed GADPCRRSU increases almost linearly with the size of the problem, while that of GADSCRRSU increases rapidly and nonlinearly.
6. Conclusions
In this paper, as a typical mathematical model of large-scale discrete systems optimization, we considered multiobjective nonlinear integer programming problems with block-angular structures. Taking into account the vagueness of the decision maker's judgments, fuzzy goals of the decision maker were introduced, and the problem was interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. An interactive fuzzy satisficing method was developed for deriving a satisficing solution for the decision maker. Exploiting the block-angular structures, we also proposed genetic algorithms with decomposition procedures for solving nonlinear integer programming problems with block-angular structures. Illustrative numerical examples were provided to demonstrate the feasibility and efficiency of the proposed method. Extensions to multiobjective two-level integer programming problems with block-angular structures will be considered elsewhere. Extensions to stochastic multiobjective two-level integer programming problems with block-angular structures are also left for the near future.
References
[1] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[2] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
[3] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 1992.
[4] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 2nd edition, 1994.
[5] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 3rd edition, 1996.
[6] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press, New York, NY, USA, 1996.
[7] T. Bäck, D. B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation, Institute of Physics, Bristol, UK, 1997.
[8] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Chichester, UK, 2001.
[9] M. Sakawa, Large Scale Interactive Fuzzy Multiobjective Programming, Physica-Verlag, Heidelberg, Germany, 2000.
[10] M. Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimization, vol. 14 of Operations Research/Computer Science Interfaces Series, Kluwer Academic Publishers, Boston, Mass, USA, 2002.
[11] C. A. C. Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers, New York, NY, USA, 2002.
[12] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Springer, Berlin, Germany, 2003.
[13] Z. Hua and F. Huang, "A variable-grouping based genetic algorithm for large-scale integer programming," Information Sciences, vol. 176, no. 19, pp. 2869–2885, 2006.
[14] M. Sakawa, K. Kato, H. Sunada, and T. Shibano, "Fuzzy programming for multiobjective 0-1 programming problems through revised genetic algorithms," European Journal of Operational Research, vol. 97, pp. 149–158, 1997.
[15] M. Sakawa, K. Kato, S. Ushiro, and K. Ooura, "Fuzzy programming for general multiobjective 0-1 programming problems through genetic algorithms with double strings," in Proceedings of the IEEE International Fuzzy Systems Conference, vol. 3, pp. 1522–1527, 1999.
[16] M. Sakawa, K. Kato, T. Shibano, and K. Hirose, "Fuzzy multiobjective integer programs through genetic algorithms using double string representation and information about solutions of continuous relaxation problems," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 967–972, 1999.
[17] M. Sakawa and K. Kato, "Integer programming through genetic algorithms with double strings based on reference solution updating," in Proceedings of the IEEE International Conference on Industrial Electronics, Control and Instrumentation, pp. 2915–2920, 2000.
[18] P. Hansen, "Quadratic zero-one programming by implicit enumeration," in Numerical Methods for Non-Linear Optimization, F. A. Lootsma, Ed., pp. 265–278, Academic Press, London, UK, 1972.
[19] J. Li, "A bound heuristic algorithm for solving reliability redundancy optimization," Microelectronics and Reliability, vol. 36, pp. 335–339, 1996.
[20] D. Li, J. Wang, and X. L. Sun, "Computing exact solution to nonlinear integer programming: convergent Lagrangian and objective level cut method," Journal of Global Optimization, vol. 39, no. 1, pp. 127–154, 2007.
[21] R. H. Nickel, I. Mikolic-Torreira, and J. W. Tolle, "Computing aviation sparing policies: solving a large nonlinear integer program," Computational Optimization and Applications, vol. 35, no. 1, pp. 109–126, 2006.
[22] M. S. Sabbagh, "A partial enumeration algorithm for pure nonlinear integer programming," Applied Mathematical Modelling, vol. 32, no. 12, pp. 2560–2569, 2008.
[23] W. Zhu and H. Fan, "A discrete dynamic convexized method for nonlinear integer programming," Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 356–373, 2009.
[24] M. Sakawa, K. Kato, M. A. K. Azad, and R. Watanabe, "A genetic algorithm with double string for nonlinear integer programming problems," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 3281–3286, 2005.
[25] L. S. Lasdon, Optimization Theory for Large Systems, Macmillan, New York, NY, USA, 1970.
[26] K. Kato and M. Sakawa, "Genetic algorithms with decomposition procedures for multidimensional 0-1 knapsack problems with block angular structures," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 33, pp. 410–419, 2003.
[27] K. M. Bretthauer, B. Shetty, and S. Syam, "A specially structured nonlinear integer resource allocation problem," Naval Research Logistics, vol. 50, no. 7, pp. 770–792, 2003.
[28] M. Sakawa, H. Yano, and T. Yumine, "An interactive fuzzy satisficing method for multiobjective linear programming problems and its application," IEEE Transactions on Systems, Man, and Cybernetics, vol. 17, no. 4, pp. 654–661, 1987.
[29] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization, Plenum Press, New York, NY, USA, 1993.
[30] H.-J. Zimmermann, "Fuzzy programming and linear programming with several objective functions," Fuzzy Sets and Systems, vol. 1, no. 1, pp. 45–55, 1978.
Copyright
Copyright © 2009 Masatoshi Sakawa and Kosuke Kato. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.