Abstract

In practical situations, solving a given problem usually calls for the systematic and simultaneous analysis of more than one objective function. Hence, a worthwhile research question may be posed thus: In multiobjective optimization, what can help the decision maker choose the best weighting? This study therefore proposes a method to identify the optimal weights involved in a multiobjective formulation. The proposed method uses Entropy and Global Percentage Error functions as selection criteria for the optimal weights. To demonstrate its applicability, the method was employed to optimize a vertical turning process, maximizing productivity and the life of the cutting tool and minimizing cost, using as decision variables the feed rate and the rotation of the cutting tool. The proposed optimization goals were achieved with a feed rate of 0.37 mm/rev and a rotation of 250 rpm. Thus, the main contributions of this study are the proposal of a structured method, distinct from the techniques found in the literature, for identifying optimal weights for multiobjective problems, and the possibility of viewing the optimal result on the Pareto frontier of the problem. This viewing possibility is highly relevant information for the more efficient management of processes.

1. Introduction

As companies today face fierce and constant competition, managers are under increasing pressure to reduce their costs and improve the quality standards of products and processes. As part of their response to this demand, managers have pursued rigorous methods of decision making, including optimization methods [1].

In many practical fields, such as engineering design, scientific computing, social economy, and network communication, there exist a large number of complex optimization problems [2]. Because of this, optimization techniques have evolved greatly in recent years, finding wide application in various types of industries. Thanks to a new generation of powerful computers, they are now capable of solving ever larger and more complex problems.

According to [1], optimization is the act, in any given circumstance, of obtaining the best result. In this context, the main purpose of decision making in industrial processes is to minimize the effort required to develop a specific task or to maximize the desired benefit. The effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables. This being the case, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function [1]. This function is known as the objective function.

In practical situations, however, solving a given problem usually calls for the systematic and simultaneous analysis of more than one objective function, giving rise to multiobjective optimization [3].

In multiobjective problems, it is very unlikely that all functions are minimized simultaneously by one optimal solution $x^*$. Indeed, the multiple objectives $f_1(x), f_2(x), \ldots, f_m(x)$ have conflicts of interest [1]. What becomes of great relevance to these types of problems, according to [1], is the concept of a Pareto-optimal solution, also called a compromise solution. The author in [1] refers to a feasible solution $x^*$ as Pareto-optimal if no other feasible solution $x$ exists such that $f_i(x) \leq f_i(x^*)$, $i = 1, 2, \ldots, m$, with $f_j(x) < f_j(x^*)$ in at least one objective $j$. In other words, a vector $x^*$ is said to be Pareto-optimal if no other solution can be found that causes a reduction in one objective function without causing a simultaneous increase in at least one of the others. Pareto-optimal solutions occur because of the conflicting nature of the objectives, where the value of any objective function cannot be improved without impairing at least one of the others. In this context, a trade-off represents giving up one objective to improve another [4].
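To make the definition concrete, the following minimal Python sketch (ours, not from [1]) tests Pareto dominance between objective vectors and filters a set of points down to its nondominated subset, assuming all objectives are to be minimized.

```python
# Minimal Pareto-dominance check for minimization problems (illustrative).
from typing import Sequence

def dominates(f_a: Sequence[float], f_b: Sequence[float]) -> bool:
    """True if f_a is no worse than f_b in every objective and
    strictly better in at least one (i.e., f_a Pareto-dominates f_b)."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def pareto_front(points: list) -> list:
    """Keep only the nondominated (Pareto-optimal) objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# The middle point is dominated by the first and is filtered out.
print(pareto_front([[1.0, 4.0], [2.0, 5.0], [3.0, 2.0]]))
```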

The purpose of multiobjective optimization methods is to offer support and ways to find the best compromise solution. Playing important roles in this are a decision maker and his preference information [4]. A decision maker, according to [4], is an expert in the domain of the problem under consideration who is typically responsible for the final solution. In order to define the relative importance of each objective function, the decision maker must assign them different weights.

Because a characteristic property of multiobjective optimization is the objective function weighting problem, how a decision maker is involved with the solution of this problem is the basis for its classification. According to [5, 6], the classes are as follows: (1) no-preference methods, where no articulation of preference information is made; (2) a priori methods, where a priori articulation of preference information is used, that is, the decision maker selects the weighting before running the optimization algorithm; (3) interactive methods, where progressive articulation of preference information is used, that is, the decision maker interacts with the optimization program during the optimization process; and (4) a posteriori methods, where a posteriori articulation of preference information is used, that is, no weighting is specified by the user before or during the optimization process. However, as no classification can be complete, these classifications are not absolute. Overlapping and combinations of classes are possible, and some methods can be considered to belong to more than one class [5]. This paper considers the a posteriori method, in consonance with the generate-first-choose-later approach [7].

A multiobjective problem is generally solved by reducing it to a scalar optimization problem, and, hence, the term scalarization. Scalarization is the converting of the problem, by aggregation of the components of the objective functions, into a single or a family of single objective optimization problems with a real-valued objective function [5]. The literature reports different scalarization methods. The most common is the weighted sum method.

The weighted sum method is widely employed to generate trade-off solutions for nonlinear multiobjective optimization problems. According to [9], a biobjective problem is convex if the feasible set is convex and the objective functions are also convex. When at least one objective function is not convex, the biobjective problem becomes nonconvex, generating a nonconvex and even disconnected Pareto frontier. The principal consequence of a nonconvex Pareto frontier is that points on the concave parts of the trade-off surface will not be estimated [10]. This instability is due to the fact that the weighted sum is not a Lipschitzian function of the weight [11]. Another drawback of the weighted sum is related to the uniform spread of Pareto-optimal solutions: even if a uniform spread of weight vectors is used, the Pareto frontier will be neither equispaced nor evenly distributed [10, 11].
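For illustration, the sketch below (ours; the two objective functions are hypothetical toys, not the turning models used later in this paper) applies weighted-sum scalarization with a uniform grid of weights; even in this convex case, the resulting frontier points are not equispaced.

```python
# Weighted-sum scalarization of a toy convex biobjective problem.
import numpy as np
from scipy.optimize import minimize_scalar

f1 = lambda x: x ** 2                 # first objective
f2 = lambda x: (x - 2.0) ** 2         # conflicting second objective

for w in np.linspace(0.0, 1.0, 11):   # evenly spread weights
    res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x),
                          bounds=(-1.0, 3.0), method="bounded")
    # The spacing of these frontier points varies strongly with w.
    print(f"w = {w:.1f}: f1 = {f1(res.x):.3f}, f2 = {f2(res.x):.3f}")
```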

Given its drawbacks, the weighted sum method is not used in this paper. Instead, the paper employs a weighted metric method, as cited by [12], in association with a normal boundary intersection method (NBI), as proposed by [13]. Reference [13] proposed the NBI method to overcome the disadvantages of the weighted sum method, showing that the Pareto surface is evenly distributed independent of the relative scales of the objective functions. It is for this feature that this study uses the NBI method to build the Pareto frontier.

The weighting issue has been discussed in the literature for at least thirty years. Reference [14] proposed an approach to help determine proper parameters in linear control system design. When the desired specifications are not given explicitly, this approach applies an interactive optimization method to search for the most suitable weights. The shortcoming of this proposal is that the decision maker needs to make pairwise comparisons of the response curves in each iteration, which becomes a problem when many iterations are needed. Reference [15] determined an optimum location for an undesirable facility in a workroom environment. The author defined the problem as the selection of a location within the convex region that maximizes the minimum weighted Euclidean distance with respect to all existing facilities, where the degree of undesirability between an existing facility and the new undesirable entity is reflected through a weighting factor. Reference [16] presented a multicriteria decision-making approach, named the Analytic Hierarchy Process (AHP), in which selected factors are arranged in a hierarchic structure descending from an overall goal to criteria, subcriteria, and alternatives in successive levels. Despite its popularity, this method has been criticized by decision analysts; some authors have pointed out that Saaty's procedure does not optimize any performance criterion [17].

In the course of time, other methods for deriving the priority weights have been proposed in the literature, such as the geometric mean procedure [18, 19], methods based on constrained optimization models [20], trial and error methods [21], methods using grey decision [22–24], methods using fuzzy logic [3, 18, 19, 25, 26], and methods using simulated annealing [26, 27].

Recently, the authors of [28], dealing with multiobjective optimization of time-cost-quality trade-off problems in construction projects, used Shannon's entropy to define the weights involved in the optimization process. According to the authors, Shannon's entropy can provide a more reliable assessment of the relative weights for the objectives in the absence of the decision maker's preferences.

In the multiobjective optimization process, the decision maker, as noted above, plays an important role, for it is the decision maker who, sooner or later, obtains a single solution to be used as the solution to the original multidisciplinary decision-making problem. Hence, a worthwhile research question may be posed thus: In multiobjective optimization, what can help the decision maker choose the best weighting?

In answering such a query, it was proposed to use two objectively defined selection criteria: Shannon's Entropy Index [29] and the Global Percentage Error (GPE) [30]. Entropy can be defined as a measure of probabilistic uncertainty. Its use is indicated in situations where the probability distributions are unknown, in search of diversification. Among the many desirable properties of Shannon's Entropy Index, the following were highlighted: (1) Shannon's measure is nonnegative, and (2) it is concave. Property (1) is desirable because the Entropy Index ensures nonnull solutions. Property (2) is desirable because it is much easier to maximize a concave function than a nonconcave one [31]. The GPE, as its name declares, is an error index. In this case, the aim was to evaluate the distance of the determined Pareto-optimal solution from its ideal value.

Thus, this study attempts to propose a method that can identify the optimal weights involved in a multiobjective formulation. The proposed method uses a Normal Boundary Intersection (NBI) approach along with Mixture Design of Experiments and, as selection criteria for the optimal weights, the Entropy and Global Percentage Error (GPE) functions.

This paper is organized as follows: Section 2 presents the theoretical background, including a discussion of Design of Experiments, the NBI approach, and the GRG optimization algorithm. Section 3 presents the metamodeling, showing the proposal's step-by-step procedure. Section 4 presents a numerical application to illustrate the adequacy of the proposal. Finally, conclusions are offered in Section 5.

2. Theoretical Background

2.1. Design of Experiments

According to [32], an experiment can be defined as a test or a series of tests in which purposeful changes are made to the input variables of a process, aiming thereby to observe how such changes affect the responses. The goal of the experimenter is to determine the optimal settings for the design variables that minimize or maximize the fitted response [33]. Design of Experiments (DOE) is then defined as the process of planning experiments so that appropriate data is collected and then analyzed by statistical methods, leading to valid and objective conclusions.

According to [32], the three basic principles of DOE are randomization, replication, and blocking. Randomization is the implementation of experiments in a random order such that the unknown effects of the phenomena are distributed among the factors, thereby increasing the validity of the research. Replication is the repetition of the same test several times, creating a variation in the response that is used to evaluate experimental error. Blocking should be used when it is not possible to maintain the homogeneity of the experimental conditions; this technique allows us to evaluate whether the lack of homogeneity affects the results.

The steps of DOE are [32]: recognition and problem statement; choice of factors, levels, and variations; selection of the response variable; choice of experimental design; execution of the experiment; statistical analysis of data; and conclusions and recommendations.

Regarding the experimental projects, the most widely used techniques include the full factorial design, the fractional factorial design, the arrangements of Taguchi, response surface methodology, and mixture Design of Experiments [32].

For the modeling of the response surface functions, the most used experimental arrangement for data collection is the Central Composite Design (CCD). The CCD, for $k$ factors, is a matrix formed by three distinct groups of experimental elements: a full factorial $2^k$ or fractional $2^{k-p}$, where $p$ is the desired fraction of the experiment; a set of central points ($cp$); and, in addition, a group of extreme levels called axial points, given by $2k$. The number of experiments required is given by the sum $n = 2^k$ (or $2^{k-p}$) $+\, cp + 2k$. In CCD the axial points are within a distance $\alpha$ of the central points, with $\alpha = (2^k)^{1/4}$ [32].
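As a concrete illustration, the following sketch (ours, not from [32]) assembles the coded CCD matrix for $k = 2$ factors with five center points, the same configuration used in Section 4.

```python
# Coded Central Composite Design for k = 2 factors and cp = 5 center points.
import numpy as np

k, cp = 2, 5
alpha = (2 ** k) ** 0.25                                    # axial distance
factorial = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
center = np.zeros((cp, k))                                  # cp center points
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])  # 2k axial points

ccd = np.vstack([factorial, center, axial])
print(ccd)                      # 2^2 + 5 + 2*2 = 13 runs in total
```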

In mixture Design of Experiments, the factors are the ingredients or components of a mixture, and consequently, their levels are not independent. For example, if $x_1, x_2, \ldots, x_q$ indicate the proportions of $q$ components of a mixture, then $\sum_{i=1}^{q} x_i = 1$, with $0 \leq x_i \leq 1$ [32]. The most used arrangements to plan and conduct mixture experiments are the simplex arrangements [34].

A disadvantage of the simplex arrangements is that most experiments occur at the borders of the array, so few points in the interior region are tested. Thus, it is recommended, whenever possible, to increase the number of experiments by adding internal points to the arrangement, such as center points and axial points. In the case of mixture arrangements, it is noteworthy that the central points correspond to the centroid itself.
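The sketch below (ours) enumerates a $\{q, m\}$ simplex-lattice for $q = 3$ mixture components and appends the centroid as an internal point; the choice $m = 4$ is only an example.

```python
# {q, m} simplex-lattice design augmented with the overall centroid.
from itertools import combinations_with_replacement

def simplex_lattice(q: int, m: int):
    """All q-component proportion vectors with entries that are multiples
    of 1/m and sum to 1 (the {q, m} simplex-lattice)."""
    points = set()
    for combo in combinations_with_replacement(range(q), m):
        points.add(tuple(combo.count(i) / m for i in range(q)))
    return sorted(points)

design = simplex_lattice(3, 4)           # 15 boundary and interior blends
design.append((1 / 3, 1 / 3, 1 / 3))     # centroid (internal point)
for w in design:
    print(w)
```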

2.2. Normal Boundary Intersection Approach

The normal boundary intersection method (NBI) is an optimization routine developed to find evenly distributed Pareto-optimal solutions for a nonlinear multiobjective problem [13, 35]. The first step in the NBI method comprises the establishment of the payoff matrix $\Phi$, based on the calculation of the individual minimum of each objective function. The solution that minimizes the $i$th objective function $f_i(x)$ can be represented as $x_i^*$, with minimum value $f_i^*(x_i^*)$. When this individual optimum replaces $x$ in the remaining objective functions, they become $f_j(x_i^*)$, $j \neq i$. In matrix notation, the payoff matrix can be written as [36]

$$\Phi = \begin{bmatrix} f_1^*(x_1^*) & \cdots & f_1(x_m^*) \\ \vdots & \ddots & \vdots \\ f_m(x_1^*) & \cdots & f_m^*(x_m^*) \end{bmatrix}. \tag{1}$$

The values of each row of the payoff matrix $\Phi$, which consist of the minimum ($f_i^U$, the utopia value) and maximum ($f_i^N$, the nadir value) of the $i$th objective function $f_i(x)$, can be used to normalize the objective functions, generating the normalized matrix $\bar{\Phi}$, such as [36]

$$\bar{f}_i(x) = \frac{f_i(x) - f_i^U}{f_i^N - f_i^U}, \quad i = 1, \ldots, m. \tag{2}$$

This procedure is mainly used when the objective functions are written in terms of different scales or units. According to [11], the convex combinations of each row of the payoff matrix form the convex hull of individual minima (CHIM). The anchor point corresponds to the solution of the $i$th single-objective optimization problem [37, 38]. The two anchor points are connected by the Utopia line [38].

The intersection point between the normal and the nearest boundary of the feasible region from the origin corresponds to the maximization of the distance between the Utopia line and the Pareto frontier. The optimization problem can then be written as [36]

$$\max_{x,\, t} \; t \quad \text{subject to: } \bar{\Phi} w + t \hat{n} = \bar{F}(x), \; x \in \Omega, \tag{3}$$

where $w$ is the vector of weights, $\hat{n}$ is the quasi-normal direction pointing toward the origin, and $\bar{F}(x)$ is the vector of normalized objective functions. This optimization problem can be solved iteratively for different values of $w$, creating an evenly distributed Pareto frontier. A common choice for the quasi-normal direction was suggested by [13, 37] as $\hat{n} = -\bar{\Phi} e$, where $e$ is a column vector of ones.

The conceptual parameter $t$ can be algebraically eliminated from (3). For bidimensional problems, for example, this expression can be simplified as [36]

$$\min_{x} \; \bar{f}_1(x) \quad \text{subject to: } \bar{f}_1(x) - \bar{f}_2(x) + 2w - 1 = 0, \; x \in \Omega. \tag{4}$$
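A runnable sketch of this bidimensional NBI iteration is shown below; the two objective functions are hypothetical stand-ins for fitted models, and SciPy's SLSQP solver is used here in place of the solver employed in the paper.

```python
# Bidimensional NBI via the simplified subproblem (4), on toy objectives.
import numpy as np
from scipy.optimize import minimize

def f1(x): return x[0] ** 2 + x[1] ** 2
def f2(x): return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# Individual optima (known analytically here) give the payoff matrix values.
x1_star, x2_star = np.array([0.0, 0.0]), np.array([2.0, 1.0])
f1U, f1N = f1(x1_star), f1(x2_star)        # utopia and nadir of f1
f2U, f2N = f2(x2_star), f2(x1_star)        # utopia and nadir of f2

f1_bar = lambda x: (f1(x) - f1U) / (f1N - f1U)
f2_bar = lambda x: (f2(x) - f2U) / (f2N - f2U)

for w in np.linspace(0.0, 1.0, 21):        # 21 evenly spread subproblems
    cons = [{"type": "eq",
             "fun": lambda x, w=w: f1_bar(x) - f2_bar(x) + 2.0 * w - 1.0}]
    res = minimize(f1_bar, x0=np.array([1.0, 0.5]),
                   constraints=cons, method="SLSQP")
    print(f"w = {w:.2f}: f1 = {f1(res.x):.3f}, f2 = {f2(res.x):.3f}")
```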

2.3. Optimization Algorithm: Generalized Reduced Gradient

The generalized reduced gradient (GRG) algorithm, according to [39], is one of the gradient methods that presents the greatest robustness and efficiency, which makes it suitable for solving a wide variety of problems. Moreover, [40] highlighted the easy access to this algorithm: besides being applicable to many nonlinear optimization problems, constrained or unconstrained, it is usually available in commercial software, such as Microsoft Excel.

GRG is known as a primal method, often called a feasible direction method. According to [41], it has three significant advantages: if the search process ends before confirmation of the optimum, the last point found is feasible, since each point generated is feasible and probably close to the optimum; if the method generates a convergent sequence, the limit point is at least a local minimum; and most primal methods are quite general, not depending on a particular problem structure, such as convexity. The GRG algorithm also provides adequate global convergence, especially when initialized close enough to the solution [42].

The search for the optimal point ends when the magnitude of the reduced gradient reaches the desired value of error (convergence criterion). Otherwise, a new search is performed to find a new point in the direction of the reduced gradient. This procedure is repeated until the best feasible solution is found.
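SciPy does not ship a GRG implementation, so the brief sketch below (ours) uses SLSQP, a related gradient-based method for constrained nonlinear problems, as a stand-in to illustrate the role played in this paper by the Excel Solver's GRG routine; the objective, constraint, and tolerance are hypothetical.

```python
# Gradient-based constrained search with a convergence tolerance, analogous
# to the GRG stopping rule described above (SLSQP used as a stand-in).
import numpy as np
from scipy.optimize import minimize

objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2   # toy function
cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(objective, x0=np.zeros(2), constraints=cons,
               method="SLSQP", tol=1e-9)  # tol acts as convergence criterion
print(res.x, res.fun)
```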

Once the problem has been modeled, the optimization procedure described above can be applied to it.

3. Metamodeling

Many of the techniques used in these strategies rely, in at least one of their stages, on imprecise and subjective elements. Hence, the analysis of weighting methods for multiple responses demonstrates that, since a large portion of strategies still use error-prone elements, significant contributions can still be made.

The effort to contribute to this topic consists of developing an alternative for the identification of optimal weights in multiobjective optimization problems. Statistical methods based on DOE are important techniques for modeling objective functions; indeed, for most industrial processes, the mathematical relationships are unknown. Optimization algorithms come into play in the step of identifying optimal solutions for the responses and for the weights, after these have been modeled by the statistical techniques mentioned above. The GRG algorithm is used through the Excel Solver function. The NBI approach is also used in the search for optimal weights, using as selection criteria the Entropy and Global Percentage Error (GPE) functions.

As shown in Figure 1, to reach the weighting methodology proposed in this work the following procedures are used.

Step 1 (experimental design). Establishment of the experimental design and execution of the experiments in random order.

Step 2 (modeling the objective functions). Definition of the equations using the experimental data.

Step 3 (formulation of the multiobjective optimization problem). Aggregation of the objective functions into a multiobjective formulation using a weighted metric method.

Step 4 (definition of the mixture arrangement). In order to set the weights to be used in the optimization routine described in Step 3, a mixture arrangement is created using Minitab 16.

Step 5 (solution of the optimization problem). The optimization problem of Step 3 is solved for each experimental condition defined in Step 4.

Step 6 (calculation of Global Percentage Error (GPE) and Entropy). The GPE and the Entropy of the Pareto-optimal responses are calculated; the GPE defines how far the analyzed point is from the objective function's ideal value, namely, the target.

Step 7 (modeling of GPE and Entropy). The canonical mixture polynomials for GPE and Entropy are determined, using as data the results of the calculations from Step 6.

Step 8 (formulation of the problem of multiobjective optimization involving GPE and Entropy functions). Once the GPE and the Entropy functions have been defined, they are aggregated into a formulation of multiobjective optimization, using the NBI methodology. Through the use of this method, it is possible to define a Pareto frontier with evenly distributed solutions, regardless of the convexity of the functions.

Step 9 (defining the optimal weights). To achieve the optimal weights, the ratio between Entropy and GPE is maximized. These parameters are, in this proposal, the selection criteria for the optimal weights.

Once this procedure has been performed and the optimal weights have been achieved, the multiobjective optimization should be performed until it reaches the optimal values for the decision variables in the original problem. A compact toy walkthrough of this chain of steps is sketched below.
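The following sketch (ours, with hypothetical toy objectives) chains Steps 4 to 6 and Step 9 end to end for two objectives; the mixture-polynomial fitting of Step 7 and the NBI frontier of Step 8 are omitted, with the Entropy/GPE ratio evaluated directly at each candidate weight vector instead.

```python
# Toy end-to-end walkthrough of the weighting procedure (hypothetical data).
import numpy as np
from scipy.optimize import minimize

# Hypothetical response models with individual minima (targets) 1 and 2.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2 + 1.0
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2 + 2.0
targets = np.array([1.0, 2.0])                  # ideal values of f1 and f2

def solve_weighted(w):
    """Steps 3/5: minimize the weighted squared distance to the ideal point."""
    obj = lambda x: w[0] * (f1(x) - 1.0) ** 2 + w[1] * (f2(x) - 2.0) ** 2
    x = minimize(obj, np.zeros(2)).x
    return np.array([f1(x), f2(x)])

weights = [(i / 10.0, 1.0 - i / 10.0) for i in range(1, 10)]   # Step 4
best = None
for w in weights:
    y = solve_weighted(w)
    gpe = float(np.sum(np.abs(y / targets - 1.0)))             # Step 6
    ent = float(-np.sum(np.array(w) * np.log(w)))              # Step 6
    if best is None or ent / gpe > best[0]:                    # Step 9
        best = (ent / gpe, w, y)

print("best weights:", best[1], "optimized responses:", best[2])
```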

4. Implementation of the Proposed Method

In order to apply the method proposed in this work, the experimental data presented in [8] were used. The authors aimed to optimize, with the application of DOE, a vertical turning process, determining the condition that led to maximum life of the cutting tool (mm), high productivity (parts/hour), and minimum cost (US$/part).

For the modeling of the response surface functions, the authors used the CCD. As previously mentioned, the CCD for $k$ factors is a matrix formed by three distinct groups of experimental elements: a full factorial $2^k$ or fractional $2^{k-p}$, where $p$ is the desired fraction of the experiment; a set of central points ($cp$); and, in addition, a group of extreme levels called axial points, given by $2k$. The number of experiments required is given by the sum $n = 2^k$ (or $2^{k-p}$) $+\, cp + 2k$. In CCD the axial points are within a distance $\alpha = (2^k)^{1/4}$ of the central points. Using as the decision variables the feed rate (mm/rev) and rotation (rpm) of the cutting tool, a full factorial design $2^2$ was performed, with 4 axial points and 5 center points, as suggested by [43], generating 13 experiments (Table 1). The graphical representation is shown in Figure 2, extracted from [32].

According to [32], in running a two-level factorial experiment, one usually anticipates fitting the first-order model, but it is important to be alert to the possibility that the second-order model is really more appropriate. Because of this, replicating center points in the design provides protection against curvature from second-order effects and allows an independent estimate of error to be obtained. Another important reason for adding replicate runs at the design center is that center points do not affect the usual effect estimates in a $2^k$ design.

The decision variables were analyzed in a coded way in order to reduce the variance. Only at the end of the analyses were they converted to their uncoded values. The parameters used in the experiments and their levels are shown in Table 2.

The analysis of the experimental data shown in Table 1 generated the mathematical models presented in Table 3. An excellent fit can be observed, since the adjusted $R^2$ is greater than 90% for all responses.

Based on the data presented in Table 3, the application of the weighting method for multiobjective optimization proposed in this paper was started. It is important to mention that Tables 1 and 3 correspond to Steps 1 and 2, respectively, as described in this work.

To implement the optimization routine described in Step 3, the payoff matrix was estimated initially, obtaining the results reported in Table 4.

Once the objective functions have been defined, they are aggregated into a formulation of multiobjective optimization by a weighted metric method; thus [12]

$$\min_{x} \; F(x) = \sum_{i=1}^{m} w_i \left[ \frac{f_i(x) - f_i^{I}}{f_i^{\max} - f_i^{\min}} \right]^2 \quad \text{subject to: } x^{T}x \leq \rho^2, \tag{5}$$

where $F(x)$ is the global objective function, $f_i^{I}$ is the ideal point, or the best result individually possible, and the values $f_i^{\max}$ and $f_i^{\min}$ are obtained from the payoff matrix. The expression $x^{T}x \leq \rho^2$ describes the constraint to a spherical solution region, where $\rho$ is the radius of the sphere.
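The sketch below (ours) shows how formulation (5) can be set up numerically; the three quadratic response models are hypothetical placeholders for the fitted models of Table 3, and the radius $\rho$ is assumed.

```python
# Weighted metric formulation (5) with a spherical region constraint.
import numpy as np
from scipy.optimize import minimize

rho = 1.414                                 # assumed sphere radius
sphere = [{"type": "ineq", "fun": lambda x: rho ** 2 - x @ x}]

# Hypothetical stand-ins for the Table 3 models, all written for
# minimization (tool life and productivity are negated).
def neg_life(x): return -(3000 - 400 * x[0] - 300 * x[1] + 50 * x[0] * x[1])
def neg_prod(x): return -(1500 + 200 * x[0] + 250 * x[1] - 30 * x[0] ** 2)
def cost(x):     return 0.04 + 0.004 * x[0] + 0.003 * x[1] + 0.001 * x[0] ** 2

fs = [neg_life, neg_prod, cost]
x_star = [minimize(f, np.zeros(2), constraints=sphere, method="SLSQP").x
          for f in fs]                       # individual optima
payoff = np.array([[f(x) for x in x_star] for f in fs])
f_min, f_max = payoff.min(axis=1), payoff.max(axis=1)  # from payoff matrix

def global_F(x, w):
    """Global objective of (5): weighted squared normalized deviations."""
    terms = [(f(x) - lo) / (hi - lo) for f, lo, hi in zip(fs, f_min, f_max)]
    return sum(wi * t ** 2 for wi, t in zip(w, terms))

w = np.array([0.488, 0.036, 0.476])          # one example weight vector
res = minimize(global_F, np.zeros(2), args=(w,), constraints=sphere,
               method="SLSQP")
print(res.x, res.fun)
```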

Once Step 3 was implemented, an arrangement of mixtures for the weights of each objective function (Step 4) was defined. Due to the constraint $\sum_{i=1}^{m} w_i = 1$, the use of the mixtures arrangement is feasible.

Subsequently, the solution of the optimization problem of Step 3 was obtained for each experimental condition defined by the arrangement of mixtures (Step 5).

Based on these results, the GPE and the Entropy (Step 6) were calculated; the results are shown in Table 5. The GPE is calculated, as shown by [30], through the expression

$$GPE = \sum_{i=1}^{m} \left| \frac{y_i^*}{T_i} - 1 \right|, \tag{6}$$

where $y_i^*$ is the value of the $i$th Pareto-optimal response, $T_i$ is the defined target, and $m$ is the number of objectives.

In order to diversify the weights of the multiobjective optimization, Shannon's Entropy Index [29] is calculated, using the Pareto-optimal responses, through the expression

$$S = -\sum_{i=1}^{m} p_i \ln p_i, \tag{7}$$

where $p_i$ denotes the normalized contribution of the $i$th Pareto-optimal response.

Figure 3 shows the Pareto frontier obtained. It was observed that the life of the cutting tool has a negative correlation of −0.917, significant at 1%, with the productivity parameter. This occurs because, to obtain higher productivity, it is necessary to increase the cutting speed in the process, exposing the tool to increased wear. In a similar way, the life of the cutting tool has a negative correlation of −0.968, significant at 1%, with the cost per part. This occurs because the largest portion of the process cost is due to the cost of the cutting tool; maximizing the life of the cutting tool thus reduces the cost per part. Interestingly, the productivity parameter and the cost per piece have a positive correlation of 0.787, significant at 1%. The explanation for this behavior is also based on the fact that the cutting tool is responsible for most of the process cost. To achieve greater productivity, it is necessary to increase the cutting speed, compromising tool life. Thus, the increase in productivity, in the analyzed case, is not enough to offset the increased costs generated by the increased wear of the cutting tool.
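Both expressions translate directly into code. In the sketch below (ours), the response vector, targets, and weights are hypothetical values used only to show the calculation.

```python
# GPE (6) and Shannon entropy (7) for one Pareto-optimal solution.
import numpy as np

def gpe(y_star, targets):
    """Global Percentage Error: sum of |y_i*/T_i - 1| over m objectives."""
    y, t = np.asarray(y_star, float), np.asarray(targets, float)
    return float(np.sum(np.abs(y / t - 1.0)))

def shannon_entropy(p):
    """Shannon's entropy -sum p_i ln p_i of a normalized vector."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                  # normalize to proportions
    p = p[p > 0.0]                   # drop zeros (0 ln 0 is taken as 0)
    return float(-np.sum(p * np.log(p)))

y_star  = [2950.0, 1700.0, 0.036]    # hypothetical optimized responses
targets = [3001.0, 1735.0, 0.03566]  # hypothetical ideal (target) values
print(gpe(y_star, targets))
print(shannon_entropy([0.488, 0.036, 0.476]))
```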

The weighted metric method [12] used in Step 5 was unable to yield an even distribution of the Pareto-optimal points along the frontier. This drawback will be overcome with the use of the NBI method [13].

From the calculated GPE and Entropy values, the corresponding canonical mixture polynomials were fitted (Step 7). To implement the routine of multiobjective optimization described in Step 8, the payoff matrix was first estimated. The results are shown in Table 6.

Based on the payoff matrix, it was possible to iteratively implement (4), choosing the weight $w$ in the range $0 \leq w \leq 1$. Using this equation and the parameters from [13], 21 points were achieved and the Pareto frontier was built for the GPE and Entropy functions. Figure 4 shows this Pareto frontier, built using the NBI methodology, with the optimum point achieved by the proposed weighting methodology highlighted.

Lastly, Step 9 was executed. To achieve the optimal weights, the following routine was considered:

$$\max_{w} \; \xi = \frac{\text{Entropy}}{\text{GPE}} \quad \text{subject to: } \sum_{i=1}^{3} w_i = 1, \; 0 \leq w_i \leq 1. \tag{9}$$

By the maximization of $\xi$, described in (9), the optimal weights $w_1$, $w_2$, and $w_3$ were found. The values are as follows: $w_1$ (weight of life of cutting tool) = 0.48766; $w_2$ (weight of productivity) = 0.03578; and $w_3$ (weight of cost) = 0.47656.
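A numerical version of this routine is sketched below (ours); because the fitted mixture polynomials of Step 7 are not reproduced here, the two models are hypothetical placeholders, and only the structure of the maximization of $\xi$ in (9) is illustrated.

```python
# Step 9: maximize the Entropy/GPE ratio over the weight simplex.
import numpy as np
from scipy.optimize import minimize

def entropy_model(w):    # hypothetical canonical mixture polynomial
    w1, w2, w3 = w
    return 0.9 * w1 + 0.5 * w2 + 0.9 * w3 + 1.2 * w1 * w2 + 1.4 * w1 * w3

def gpe_model(w):        # hypothetical canonical mixture polynomial
    w1, w2, w3 = w
    return 0.3 * w1 + 0.8 * w2 + 0.3 * w3 - 0.2 * w1 * w3

cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]  # mixture constraint
res = minimize(lambda w: -entropy_model(w) / gpe_model(w), # maximize xi
               x0=np.full(3, 1.0 / 3.0), bounds=[(0.0, 1.0)] * 3,
               constraints=cons, method="SLSQP")
print("optimal weights:", res.x)
```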

These optimal weights were used in a multiobjective optimization of the life of the cutting tool, productivity, and cost, as in (5), reaching the values of 3,001, 1,735, and 0.03566, respectively. The optimal values of the decision variables are as follows: feed rate = 0.37 mm/rev and rotation = 250 rpm.

For each point in the Pareto frontier for the GPE and Entropy functions, there is a set of values for the weights $w_1$, $w_2$, and $w_3$. In order to build a Pareto frontier, these weights were employed, using (5), to optimize the objective functions life of cutting tool, productivity, and cost. The results of the optimization process and the Pareto frontier with the indication of the optimum are presented in Table 7 and Figure 5.

The analysis of the data generated with the NBI method shows that the behavior of the correlations between the parameters life of cutting tool, cost per part, and productivity was the same as that obtained previously with the weighted metric method.

However, in Figure 5, it can be seen that the Pareto-optimal points are now evenly distributed along the frontier. By applying the NBI method to the selection criteria, Entropy and GPE, it is possible to reach weight parameters that ensure an even distribution when the original multiobjective problem is solved using the weighted metric method, as in (5). Moreover, using Entropy and GPE as selection criteria, the best fitted point (the highlighted one in Figure 5) can be found. With this proposal, the point on the frontier was identified that was, at the same time, the most diversified one and the one with the lowest error relative to the ideal value of each objective function.

5. Conclusions

This work aimed to propose a method that can identify, in a nonsubjective manner, the optimal weights involved in a multiobjective formulation. The scarcity of works proposed to this end is evidence of this work's relevance. The definition of these weights is also important because this information can be useful to the decision maker in the decision-making process.

Thus, this paper has presented a methodology for defining the optimal weights that, by using Design of Experiments (DOE), generated optimum values for the decision variables that can be implemented in the vertical turning process analyzed herein. The method is easy to implement and does not impose a large computational demand, since the required tools are available in popular software such as the Solver function of Excel and Minitab.

The Entropy and Global Percentage Error (GPE) functions, used as criteria for evaluating Pareto-optimal solutions, were identified as suitable indicators, enabling their modeling via mixture polynomials that delimit a region of maximum diversification and minimum error for the weight combinations analyzed.

Another finding in this study was the possibility of constructing, in an easy way, an evenly distributed Pareto frontier for more than two objectives. With the present proposal, the Entropy and the GPE can be calculated for any number of objective functions and the Pareto frontier and the optimal weights can be reached using the NBI method as described. This is an advantage, mainly when the computational economy is considered.

Thus, the main contributions of this study are the proposal of a structured method, differentiated from the techniques found in the literature, for identifying optimal weights for multiobjective problems, and the possibility of viewing the optimal result along the Pareto frontier of the problem. This viewing possibility is very relevant information for the more efficient management of processes. Moreover, it can be stated that the proposed method promotes maximum achievement among the multiple objectives, that is, it is able to identify, within a set of Pareto-optimal solutions, the best optimum based on the aforementioned selection criteria.

As a suggestion for future work, the application of the presented methodology to other industrial processes is proposed. Its application, a priori, is possible in various contexts and with different numbers of objective functions. Besides, the proposal is applicable to many studies using stochastic programming, where it is necessary to include, at the same time, the mean and the variance in the objective function, as in the works developed by [33, 44].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Brazilian Government agencies CNPq, CAPES, FAPEMIG, and IFSULDEMINAS for their support. Moreover, the authors would like to thank the agreement signed between the company Intermec Technologies Corporation (Honeywell) and the Federal University of Itajuba (Process ID 23088.000465/2014-57).