Abstract

The integration of a decision maker’s preferences into evolutionary multi-objective optimization (EMO) has been a common research topic over the last decade, and several preference-based evolutionary approaches have been published. The reference point-based non-dominated sorting genetic algorithm II (R-NSGA-II) is one of the best-known preference-based evolutionary approaches. It mainly aims to find a set of Pareto-optimal solutions in the region of interest (ROI) rather than to obtain the entire Pareto-optimal set, and it uses the Euclidean distance between each candidate solution and the reference point as its preference metric. However, this metric may not produce the desired solutions because the final minimal Euclidean distance value is unknown; thus, determining whether the true Pareto-optimal solution has been reached at the end of an optimization run is difficult. In this study, the R-NSGA-II method is modified by replacing the Euclidean distance metric with the recently proposed simplified Karush–Kuhn–Tucker proximity measure (S-KKTPM), a distance measure that can predict the convergence behavior of a point to the Pareto-optimal front without prior knowledge of the optimal solution. Extensive experiments were conducted on various standard problems with 2 to 10 objectives. The results show that the proposed algorithm is highly competitive with several state-of-the-art preference-based EMO methods, is effective at obtaining the preferred solutions in the ROI, and can control the size of each preferred region separately in a single run.

1. Introduction

Most real-world optimization problems usually contain two or more conflicting objective functions. These objective functions must be optimized simultaneously. This type of problem is known as a multi-objective optimization problem (MOP).

In MOPs with contradictory objectives, a single solution that can be considered the best does not always exist. Instead, a set of solutions represents the best compromises among the different objectives. This set, which belongs to the search space, is known as the Pareto set (or efficient set), whereas its image in the objective space is known as the Pareto front (PF) [1, 2]. Several evolutionary multi-objective optimization (EMO) algorithms, such as NSGA-II [3], SPEA2 [4], and MOEA/D [5], have been suggested over the past two decades or more. Classical EMO mainly aims to obtain a set of well-converged and well-distributed non-dominated solutions that approximate the entire PF [6, 7]. Researchers have devoted considerable effort to developing such algorithms in recent years [8–14].

The proportion of non-dominated solutions rises as the number of objectives increases, which is one of the fundamental problems of all EMO approaches. Because a high percentage of non-dominated solutions provides insufficient selection pressure, the EMO approach cannot advance toward the optimal region. Incorporating the decision maker’s preferences into the algorithm is a practical way to deal with this issue. A new kind of ranking mechanism [9] can be used to strengthen the selection pressure and steer the optimization toward a specified region.

In real life, the decision maker (DM) is always focused on some specific subsets of the obtained solutions. Preference-based multi-criteria decision-making (MCDM) techniques aim to find a part of the PF, whereas EMO algorithms aim to obtain a well-distributed set of points close to the whole PF. We call the part of the optimal solutions that is near to or lies on the PF a region of interest (ROI) [15]. Solutions within the ROI satisfy the DM’s needs. However, this does not mean that efficient solutions outside the ROI are not optimal solutions to the problem. Preference information given by the DM can enable a highly efficient search in EMO. Many forms of preference information exist, such as reference points (RPs), preference angles, and reference weights; the RP is one of the most widely used in preference-based EMO algorithms. As mentioned above, EMO tries to find well-distributed efficient solutions across the whole PF, as displayed in Figure 1(a), which also illustrates the feasible and infeasible objective regions. In contrast, preference-based EMO algorithms concentrate on a certain part of the true PF based on a preference point (reference point) determined by the DM; the non-dominated points cluster near the RP, as shown in Figure 1(b).

Preference-based methods are typically classified according to how the preferences are expressed by the DM [16–18]: (i) a priori methods, where preferences are expressed before calculating PO solutions, for example, through a utility function [19] or an RP [20]; (ii) a posteriori methods, in which the DM chooses the preferred solution after a set of efficient solutions has been calculated (for example, [21, 22]); (iii) interactive methods, where the DM guides the search with a utility function that may change during the optimization process as new information is acquired (for example, [23, 24]); and (iv) methods not based on preferences, where no preference information is available and the idea is to find a balance between the objectives [25].

Over the past two decades, researchers have focused their attention on preference-based EMO approaches. These approaches have been actively developed, and they mainly focus on specific parts of the PF. Depending on the preference information supplied by the DM, these algorithms seek to find an ROI that is close to/on the true PF.

Numerous preference-based EMO algorithms have been introduced. Deb and Sundar [26] suggested the RP-based NSGA-II (R-NSGA-II), which focuses on obtaining a preferred ROI during the evolutionary process. By including the RP’s location information in the Pareto dominance, Molina et al. [27] initiated a concept of Pareto dominance termed g-dominance. Ben Said et al. [28] presented a novel variant of the Pareto dominance relation, called r-dominance, which achieves good convergence to the PF. Ruiz et al. [29] proposed WASF-GA, another variant of the preference-based MOEA. Yu et al. [30] suggested a different representative preference-based decomposition MOEA that decomposes the preference information into several scalar optimization problems. Recently, modified R-NSGA-II methods have been proposed to help DMs converge to Pareto-dominance-compliant solutions in a specific area of interest [31–33].

Although many preference-based algorithms use various metrics to select preferred solutions, some of these metrics require prior knowledge of the PF while others require specific parameters [34, 35]. S-KKTPM does not require prior knowledge of the PF or any parameters.

Herein, we introduce a novel preference-based NSGA-II algorithm. The original R-NSGA-II study used the Euclidean distance as the metric between two trade-off solutions; in our study, we use the simplified Karush–Kuhn–Tucker proximity metric (S-KKTPM) instead. S-KKTPM can predict the convergence behavior of a point to the PF without prior knowledge of the optimal solution [36, 37]. The Karush–Kuhn–Tucker (KKT) conditions occupy a significant role in optimization theory, and the KKT proximity measure was derived from these conditions. Incorporating S-KKTPM within R-NSGA-II provides theoretical convergence properties for the final preferred points. The main contributions of the introduced algorithm are listed below:
(i) We introduce a new RP-based algorithm called RS-KKTPM, based on the S-KKTPM metric, by integrating S-KKTPM with NSGA-II to obtain the PO solutions in the ROI.
(ii) It obtains different ranges of the ROI in a single run.
(iii) It adds flexibility for handling several RPs at the same time.
(iv) It performs well when the RP is located in different regions.
(v) It achieves a good balance between convergence and diversity around the RP.
(vi) It solves different shapes of PF (e.g., convex, concave, and discontinuous) with different numbers of objective functions (up to 10 objectives).
(vii) Its results are competitive with those of other preference mechanisms on many-objective problems.

The layout of this work is as follows. Section 2 reviews some fundamental definitions. An overview of the works relevant to this paper is given in Section 3. In Section 4, the R-NSGA-II algorithm is combined with the S-KKTPM metric. In the following section, the experiments and the obtained results are described and discussed. Section 6 summarizes the paper’s achievements and presents some future work. Table 1 displays the nomenclature/abbreviations used in this study.

2. Basic Definitions

An MOP contains a set of decision variables, objective functions, inequality constraints, and equality constraints. An MOP can be defined as follows [28]:

minimize F(x) = (f_1(x), f_2(x), ..., f_M(x)),
subject to g_j(x) ≤ 0, j = 1, 2, ..., J,
h_k(x) = 0, k = 1, 2, ..., K,

where x = (x_1, x_2, ..., x_n) is an n-dimensional decision variable vector, f_1, ..., f_M are the objective functions, and g_j and h_k are the inequality and equality constraints of the problem.
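As a concrete illustration of such a problem, the following sketch encodes Schaffer's classic unconstrained bi-objective problem in this form (the function name and the use of Python here are our own choices, not from the original study):

```python
import numpy as np

def schaffer_mop(x):
    """Schaffer's classic unconstrained bi-objective MOP:
    minimize f1(x) = x^2 and f2(x) = (x - 2)^2 simultaneously.
    Every x in [0, 2] is Pareto-optimal: improving one objective
    there necessarily worsens the other."""
    return np.array([x ** 2, (x - 2.0) ** 2])

# The two objectives conflict: x = 0 is best for f1, x = 2 is best for f2.
```

No single x minimizes both objectives at once, which is precisely why a set of compromise solutions (the Pareto set) is needed.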

In an MOP with contradictory objectives, the search space is only partially ordered, and two solutions may be indifferent to each other. A single decision vector simultaneously optimizing all the objectives is unusual. Consequently, for MOPs, the <, ≤, and = operators are extended as follows.

Definition 1 (Pareto dominance relation). Given two solutions x, y ∈ Ω, x is said to dominate y in the Pareto sense (denoted by x ≺ y) if and only if f_i(x) ≤ f_i(y) for all i ∈ {1, ..., M} and f_j(x) < f_j(y) for at least one j ∈ {1, ..., M}.

Definition 2 (non-dominated solution). A solution x* ∈ Ω (Ω is the feasible space) is said to be non-dominated if and only if there does not exist another solution x ∈ Ω such that x ≺ x*.

Definition 3 (Pareto-optimal (PO)). A solution x* ∈ Ω is said to be PO if x* is non-dominated with respect to Ω.
The set of PO solutions in the search space is called the Pareto solution set (PS). In contrast, the set of all non-dominated vectors in the objective space corresponding to the PS is called the PF [38].
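Definitions 1–3 can be expressed directly in code; the following minimal Python helpers (our own illustration, assuming minimization and using a naive quadratic-time filter) implement the dominance test and a non-dominated filter:

```python
import numpy as np

def dominates(fx, fy):
    """Pareto dominance for minimization: fx dominates fy iff fx is no worse
    in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def non_dominated(F):
    """Return the indices of the non-dominated members of the list of
    objective vectors F (a naive O(n^2) scan, adequate for illustration)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

For example, among the points (1, 3), (2, 2), (3, 1), and (3, 3), the first three are mutually non-dominated while the last is dominated by (2, 2).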

Definition 4 (PS and PF). The PS is defined as PS = {x ∈ Ω | x is Pareto-optimal}. The corresponding PF is defined as PF = {F(x) | x ∈ PS}.

Definition 5 (RP). An RP z = (z_1, z_2, ..., z_M) is a point defined in the objective space, where z is provided by the DM.

Definition 6 (ROI). The ROI is the projection of the set of preferred efficient solutions in the objective space, i.e., the set of efficient points whose distance from the efficient point closest to the RP z is at most δ, where δ denotes the radius of the ROI and is determined by the DM.

In Section 3.1, KKT optimality conditions are briefly reviewed. Section 3.2 presents R-NSGA-II in detail.

3.1. KKT Conditions

KKT conditions play an important role in optimization theory. Through these conditions, it is possible to know whether the solution produced by an EMO algorithm is a PO solution. For an MOP with inequality constraints, the KKT conditions are defined as follows [39]:

∑_{i=1}^{M} λ_i ∇f_i(x) + ∑_{j=1}^{J} u_j ∇g_j(x) = 0, (4)
g_j(x) ≤ 0, j = 1, ..., J, (5)
u_j g_j(x) = 0, j = 1, ..., J, (6)
u_j ≥ 0, j = 1, ..., J, (7)
λ_i ≥ 0 for all i, with λ_i > 0 for at least one i ∈ {1, ..., M}. (8)

The parameters λ_i and u_j are called the Lagrange multipliers for the ith objective function and the jth inequality constraint, respectively. Any solution that satisfies all the above conditions is called a KKT point. Equations (4) and (6) are called the equilibrium and complementary slackness equations, respectively. The conditions in equation (5) ensure feasibility, while those in equation (7) ensure that the parameters u_j are non-negative. The conditions in equation (8) ensure that the parameters λ_i are also non-negative, with at least one of them non-zero. In the following section, we briefly discuss the R-NSGA-II algorithm.

3.2. R-NSGA-II Algorithm

As mentioned in Section 1, classical EMO algorithms mainly aim to evolve a finite number of random solutions into a set of non-dominated solutions that converge to and distribute across the entire PF over several generations. In contrast, preference-based algorithms aim to produce non-dominated solutions centered around the desired part(s) of the PF based on the preference information supplied by the DM. This information can be given in several forms: RPs, aspiration levels, weights, and reference directions [2]. RPs are one of the most used techniques in preference-based EMO algorithms. Usually, an RP is said to be achievable if it lies in the feasible objective space; otherwise, it is said to be unachievable.

In 2006, Deb and Sundar [26] put forward the R-NSGA-II method, which represents the DM’s preferences as one or more RPs. The method follows the reference point approach, which is based on preference information [40]. It is a modification of the widely used EMO approach NSGA-II, in which the crowding distance metric is replaced by the Euclidean distance from the RP that encodes the DM’s preference. The primary notion behind R-NSGA-II is to give preference to parents that have short Euclidean distances to the RP. The R-NSGA-II procedure is as follows. A parent population P_t (of size N) is randomly generated. A new offspring population Q_t (of size N) is generated using binary tournament selection, recombination, and mutation. Thereafter, the populations P_t and Q_t are combined, and the resulting population R_t (of size 2N) is sorted into fronts according to dominance. The new population is built starting with the fronts of lowest rank until a front F_l is reached that cannot be accepted without making the population size exceed N. The preference operator is then applied to the front F_l, and the remaining slots are filled from it according to an environmental selection approach. The Euclidean distance to each RP is calculated for each solution of the front F_l. For each RP, the solution closest to that point receives the preferred distance value of 1; thus, the solutions closest to all the RPs are given the shortest preferred distance. The preferred distance value of 2 is then assigned to the solutions with the next smallest distance to each RP, and the process is repeated for the remaining solutions. When generating the new offspring population, tournament selection prefers solutions with a lower preferred distance value.
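The preferred-distance ranking described above can be sketched as follows (a minimal Python illustration of R-NSGA-II's preference operator, assuming minimization; the names are ours, and objective normalization is omitted for brevity):

```python
import numpy as np

def preferred_distance(F, ref_points):
    """R-NSGA-II preference operator: for each reference point, rank the
    solutions by Euclidean distance (closest gets rank 1); each solution's
    preferred distance is its best (minimum) rank over all reference points."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    pref = np.full(n, n + 1)
    for rp in np.asarray(ref_points, dtype=float):
        d = np.linalg.norm(F - rp, axis=1)
        ranks = np.empty(n, dtype=int)
        ranks[np.argsort(d)] = np.arange(1, n + 1)   # closest -> rank 1
        pref = np.minimum(pref, ranks)
    return pref
```

With several RPs, a solution that is closest to any one of them receives rank 1, so the neighborhoods of all RPs are favored simultaneously in tournament selection.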

The idea of ε-clearing is utilized to maintain diversity in the solutions close to each RP. First, a solution of the front is randomly chosen. Next, the Euclidean distance in the objective space between all solutions and the chosen solution is computed. The points whose sum of normalized objective-value differences from the selected point is less than or equal to ε are then given an artificially large distance to remove them from the competition; in this way, only one solution within each ε-neighborhood remains relevant. The process continues by randomly choosing another solution, different from the previous one, to which the ε-based selection strategy described above is applied again.
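A minimal sketch of this ε-clearing procedure follows (our own Python illustration; for simplicity, cleared solutions are removed outright instead of being assigned an artificially large distance, which has the same selection effect, and the normalization by the objective ranges is an assumption):

```python
import numpy as np

def epsilon_clearing(F, eps, rng=None):
    """Epsilon-clearing: repeatedly pick a random surviving solution and
    clear every other solution whose sum of normalized objective-value
    differences from it is <= eps, so only one representative per
    eps-neighborhood survives. Returns the kept indices, sorted."""
    rng = np.random.default_rng(rng)
    F = np.asarray(F, dtype=float)
    span = F.max(axis=0) - F.min(axis=0)
    span[span == 0] = 1.0                      # avoid division by zero
    keep, candidates = [], list(range(len(F)))
    while candidates:
        i = candidates[rng.integers(len(candidates))]
        keep.append(i)
        candidates = [j for j in candidates
                      if j != i and np.sum(np.abs(F[j] - F[i]) / span) > eps]
    return sorted(keep)
```

Two nearly identical points collapse to a single survivor, while well-separated points are all retained.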

3.2.1. Advantages and Disadvantages

Compared with classical RP-based algorithms, R-NSGA-II works well for high-dimensional MOPs; it is suitable for any front shape and for many objectives and variables. It also shows further advantages: the classical methods depend on a reference direction (weight vector), whereas R-NSGA-II is independent of the weight vector. Moreover, the classical methods in most cases can only find efficient solutions for different RPs by applying the algorithm to each RP several times, whereas R-NSGA-II can produce a set of efficient points for different RPs in a single run. RPs can lie anywhere in the objective space (achievable or unachievable). However, the method requires an ε parameter to maintain the diversity of the selected solutions near the RPs.

As mentioned above, the crowding distance metric of NSGA-II is replaced by the Euclidean distance metric in R-NSGA-II to obtain the solutions closest to the RPs assigned by the DM. However, the final minimal Euclidean distance value is unknown; thus, ascertaining whether the efficient solution has been reached at the end of an optimization run is difficult. In other words, the Euclidean distance metric carries no information about the proximity of a solution to the PF. Additionally, in the case of achievable RPs, the Euclidean distance may not decrease monotonically to its minimum value. One major disadvantage of this method is that the DM cannot control the size of each preferred region separately. Furthermore, the DM cannot smoothly control the obtained PO solutions within each desired region. Below, we introduce a new approach based on integrating the S-KKTPM metric with the R-NSGA-II algorithm.

4. The Introduced R-NSGA-II with S-KKTPM

In Section 4.1, the development of the KKT-proximity measure is introduced. Section 4.2 presents the proposed RS-KKTPM in detail.

4.1. S-KKTPM

KKT conditions are necessary to know whether the solution obtained by an EMO algorithm is a KKT point; hence, they play an important role in optimization theory [39, 41]. During the last decade, a KKT proximity measure has been developed utilizing KKT optimality theory. In 2013, a KKT-based proximity metric (KKTPM) was suggested by Dutta et al. [42] to calculate a KKTPM value for any iterate (or solution) of a single-objective optimization problem. Deb and Abouhawwash [37, 43] extended this KKTPM to MOPs. Their extension, which incorporates the KKTPM metric via scalarization approaches, aims to relate the convergence property of a solution to a specific optimal solution. Further information on KKTPM for MOPs can be found in [37, 43].

In 2021, Eichfelder and Warnow [44] proposed a new KKTPM metric for MOPs that does not use any scalarization approach. The authors defined a methodology for calculating the KKTPM value of any solution of the MOP given in equation (1); the value obtained after this auxiliary optimization is the KKTPM at that point. First-order derivatives of the constraint and objective functions are required to solve it, and the metric can be applied to single-, multi-, and many-objective optimization problems. To reduce the number of constraints of this auxiliary problem, we propose to redefine it; the value that solves the redefined problem is referred to as the simplified KKTPM (S-KKTPM). The primary goal of reducing the number of constraints is to save computational cost: compared with the problem in equation (9), the number of inequality constraints is reduced, while one equality constraint is retained, without affecting the optimization process. To verify that the KKTPM and S-KKTPM values obtained after optimization are identical at a given point, we first consider the unconstrained ZDT1 problem with thirty variables [45]. We ran NSGA-II on this problem for 200 generations with a population size of 40. Figures 2 and 3 illustrate the KKTPM and S-KKTPM values, respectively, versus generation number for efficient solutions to the unconstrained ZDT1 problem. The minimum, 25th percentile, 50th percentile, 75th percentile, and maximum values are also plotted for all PO solutions at each generation. Both figures show a steady reduction as the generation number increases.
With a correlation coefficient of 0.9996, the two figures show that the values and patterns of KKTPM and S-KKTPM are identical. Second, we consider the constrained SRN problem with two variables and two constraints [46]. On this problem, we ran NSGA-II until generation 500 with a population size of 200. The KKTPM and S-KKTPM values versus generation number for the obtained solutions are displayed in Figures 4 and 5, respectively. These figures also show that the values and patterns of the two measures are congruent, with a correlation coefficient of 0.9999.

An advantage of the S-KKTPM metric given in equation (10) is that it predicts the convergence behavior of a point to the PF without prior knowledge of the PO solution. Several features of S-KKTPM are as follows [37, 43, 44]:
(i) It can be utilized as a termination condition for an optimization algorithm.
(ii) It is applicable to high-dimensional MOPs; S-KKTPM is suitable for any front shape and for large numbers of objectives and variables.
(iii) It provides a monotonic S-KKTPM surface over the objective space: the S-KKTPM value decreases monotonically almost to zero as the iterate approaches the efficient solution. Figure 6 displays the S-KKTPM values for a set of solutions located at different positions in the objective space; for example, the S-KKTPM value is zero for the true PO solutions (marked by blue circles), which lie on the PF. For solutions close to the PF (marked by green circles), the S-KKTPM value is small, and for solutions far from the PF (marked by white circles), it is large.
(iv) Calculating the S-KKTPM value does not require any parameters, such as a weight vector or an ideal point, unlike other versions of the KKT proximity measure.

In this study, we solve the S-KKTPM optimization problem to calculate the S-KKTPM value at each iterate. We used MATLAB’s fmincon() optimization function to solve the S-KKTPM optimization problem (see Algorithm 1).

Input:
Output: S-KKTPM value
(1)begin
(2) Calculate and , , .
(3) Solve equation (10) utilizing MATLAB’s fmincon() function to find
(4)end
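Since the exact S-KKTPM formulation of equation (10) is not reproduced here, the following Python sketch only illustrates the general idea behind Algorithm 1, using a plain KKT-stationarity residual for unconstrained MOPs and scipy's SLSQP solver in place of MATLAB's fmincon(); this is our simplified stand-in, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import minimize

def kkt_residual(grads):
    """Minimize ||sum_i lam_i * grad f_i||^2 over lam_i >= 0, sum lam = 1.
    The optimal value is ~0 at a Pareto-critical point (the objective
    gradients can be combined to cancel) and grows with the distance
    from Pareto-criticality."""
    G = np.atleast_2d(np.asarray(grads, dtype=float))  # (M objectives, n vars)
    M = G.shape[0]
    obj = lambda lam: float(np.sum((lam @ G) ** 2))
    res = minimize(obj, x0=np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M,
                   constraints=[{"type": "eq",
                                 "fun": lambda lam: np.sum(lam) - 1.0}],
                   method="SLSQP")
    return res.fun
```

For f1(x) = x^2 and f2(x) = (x - 2)^2, the residual is zero on the Pareto set [0, 2] (e.g., at x = 1, where the gradients are 2 and -2) and positive outside it, mirroring the monotonic behavior described for S-KKTPM.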
4.2. The Proposed RS-KKTPM

To make R-NSGA-II solutions preferred and acceptable to DMs and to easily control the size of each region, the S-KKTPM metric is integrated with the R-NSGA-II algorithm.

In this study, we refer to the RP-based S-KKTPM algorithm as RS-KKTPM. The introduced algorithm allows DMs to apply any number of RPs and to control the size of each preferred part separately. In the introduced algorithm, we replace the Euclidean distance metric utilized in R-NSGA-II with the S-KKTPM metric; solutions with small S-KKTPM values are preferred. The preference operator is utilized in this algorithm to select a subset of solutions from the final front that cannot be fully accommodated, in order to maintain the population size in the new population. Instead of the preferred distance used in R-NSGA-II, this preference operator uses the S-KKTPM-based preference value.

We now characterize an iteration of the introduced R-NSGA-II with S-KKTPM process in which the DM provides one or more RPs in the following section (see Algorithm 2). Both parents and children are merged as usual, and the non-dominated sorting strategy is employed to classify the merged population into non-domination levels (so-called fronts).

Input: Population size , set of reference points , Generation number, , Parameter
Output: Children
(1)Create initial parent population of size ;
(2)Repeat
(3)Generate offspring population from by applying selection, crossover, and mutation operators;
(4)Combine and population (i.e., );
(5)Classify into different fronts (, etc., where is the best non-dominated front, is the next best non-dominated front, and so on) utilizing non-dominated sorting algorithm;
(6)Calculate the S-KKTPM metric values of each front individual using the updated niching strategy specified in Algorithm 3;
(7)Create a new parent population by choosing individuals, which are closer to the better front and have the lowest S-KKTPM value;
(8)Until (maximum number of generations)
Input: Population , set of reference points , set of preference radius
Output: Offspring solutions;
(1)begin
(2)for i 1 to do
(3)  for j 1 to do
(4)    = Euclidean distance between and ;
(5)  end for
(6) =  ;
(7)end for
(8)Identify the mid-point (the member with the minimum Euclidean distance to the reference point);
(9)for i 1 to do
(10)  for j 1 to do
(11)    = Euclidean distance between and ;
(12)  end for
(13)end for
(14)for i 1 to do
(15)  for j 1 to do
(16)   if, then
(17)    Calculate S-KKTPM value at iterate //Algorithm 1;
(18)   else
(19)    Set S-KKTPM equal to ;
(20)   end if
(21)  end for
(22)end for
(23)end
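The updated niching strategy of Algorithm 3 can be sketched as follows (our own Python illustration; the function names, the normalization by the maximum distance, and the penalty constant are assumptions):

```python
import numpy as np

def roi_scores(objs, ref_points, radii, skktpm_values, penalty=1e9):
    """For each reference point, find the population member closest to it
    (the mid-point), then keep the S-KKTPM value only for solutions whose
    normalized distance to that mid-point is within the ROI radius delta;
    all other solutions receive a large penalty so they lose the selection."""
    objs = np.asarray(objs, dtype=float)
    sk = np.asarray(skktpm_values, dtype=float)
    scores = np.full(len(objs), penalty)
    for rp, delta in zip(ref_points, radii):
        d_rp = np.linalg.norm(objs - np.asarray(rp, dtype=float), axis=1)
        mid = objs[np.argmin(d_rp)]                  # mid-point for this RP
        d_mid = np.linalg.norm(objs - mid, axis=1)
        if d_mid.max() > 0:
            d_mid = d_mid / d_mid.max()              # normalize to [0, 1]
        inside = d_mid <= delta
        scores[inside] = np.minimum(scores[inside], sk[inside])
    return scores
```

Solutions inside some ROI compete by their S-KKTPM values (smaller is better), while solutions outside every ROI carry the penalty and are eliminated first.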

The following are the primary ideas underlying the selection of the preferred set of solutions within the preferred range:
(i) Solutions closest to the RP are always prioritized.
(ii) A preferred-region sizing strategy is used to control the preferred range near the RP.
(iii) An ε-based selection strategy is utilized to keep the solutions spread within the range assigned by the DM.

The following changes are made to the original NSGA-II niching approach to integrate the three notions mentioned above:
Step 1. Generating a desired region for each RP. The Euclidean distance between each member of the merged population and an RP is computed to specify the desired region. The member with the smallest Euclidean distance to the RP is identified; this member (or point) is called the mid-point, as illustrated in Figure 7.
Step 2. Determining the size of the desired region for each RP. Here, we introduce a new strategy to determine the size of each desired area. The solutions within distance δ of the mid-point are chosen to be in the desired area. Parameter δ is given by the DM and determines the size of the ROI, as illustrated in Figure 7. This figure also shows how to choose a population of size eight from a merged population containing 17 members. All solutions in the first front are selected, as shown in Figure 7; then only two solutions are needed from the second front, and they are chosen as follows. The S-KKTPM value is calculated for each solution within the ROI, and the solutions are ranked by this value. If a solution is not within the preferred region, its S-KKTPM value is set to a high value (see Algorithm 3). In this manner, the smallest rank of one is given to the solution closest to the PF, the rank of two is given to the solution with the next smallest S-KKTPM value, and so on. Finally, the solutions with the smallest S-KKTPM values are preferred to survive into the new population.
Step 3. Maintaining a good distribution of the obtained solutions. The ε-clearing selection strategy employed in the original R-NSGA-II is used in RS-KKTPM to control the diversity of the chosen solutions near the RPs.
To implement this strategy, a solution is selected randomly from the set of non-dominated solutions. Then, any solution whose sum of normalized differences in objective values from it is less than ε is given a high preference distance value to discourage it from remaining in the next generations of the evolution process. The procedure is then repeated with a new solution picked from the set of efficient points (excluding the ones previously selected). The value of ε is chosen according to the application and can differ for each objective; thus, it is a parameter provided by the DM.

Figure 8 depicts how to determine the size of the ROI for each RP using the mid-point strategy. As discussed in Step 1 above, the mid-point is the member of the population that is closest to the RP. As shown in Figure 8, an RP can lie anywhere in the objective space (feasible or infeasible), whereas the mid-point can lie only in the feasible objective domain. The purpose of the mid-point strategy can be summarized as follows: (1) obtaining PO solutions that are close to the given RP; (2) determining the size of the ROI by calculating the Euclidean distance between each solution and the mid-point (each distance value is normalized to the interval [0, 1], using zero as the lower bound and one as the upper bound; the solutions that lie within δ are candidates to be within the ROI); and (3) obtaining good convergence of solutions towards the ROI. As discussed in Step 2 above, the S-KKTPM metric acts as a differentiator in selecting the solutions that should remain in the next generations of the optimization process. The solution with the smallest S-KKTPM value is preferentially kept for the next generations because it is the closest to the true PF. In this way, RS-KKTPM obtains good convergence of solutions towards the ROI and distributes them well along the preferred part. RS-KKTPM works well with different RPs (feasible or infeasible) in the objective space, as displayed in Figure 8. In real-world applications, the objectives should be normalized when they do not have the same units; otherwise, ε is not a meaningful parameter.

One of the essential advantages of the introduced method is its ability to control the sizes of the preferred areas separately within a single simulation run (see Figure 8). This is done using the preferred-region sizing strategy discussed above, based on the S-KKTPM metric. This metric is used as a preference operator to select a subset of solutions close to the PF to move into the next population. As an iterate approaches the PF, its S-KKTPM value decreases monotonically almost to the final minimum value of zero; thus, the S-KKTPM metric reveals the proximity of a point in the search space to the PF. Through this strategy, the introduced algorithm can steer the solutions during the optimization process towards the preferred regions in proportion to the size of each area. In other words, a large ROI receives more PO solutions than a smaller preferred region.

On the other hand, the original R-NSGA-II algorithm cannot control the size of the preferred regions separately through a single run. The reason is the preferred-region sizing strategy used in this algorithm, which is based on the Euclidean distance metric. This metric is utilized as a preference operator in the R-NSGA-II algorithm. However, the Euclidean distance metric does not have the unique properties that the S-KKTPM metric does. For example, the final minimal Euclidean distance value is unknown. In other words, the Euclidean distance metric does not have any information about the proximity of a point to the PF. So, the R-NSGA-II algorithm cannot obtain different ranges of ROI in a single run.

5. Experimental Results and Discussion

This section uses a set of benchmark problems and engineering design problems to test the introduced methodology. Specifically, we adopted five two-objective unconstrained problems from the ZDT test suite [45], four bi-objective constrained problems (BNH, SRN, OSY, and TNK) from [46], and seven test problems having three to ten objective functions from the DTLZ test suite [47]. In addition, we adopted two engineering design problems: the welded beam design problem with two objective functions (taken from [48]) and the car side impact design problem with three objective functions (taken from [49]). We then compare the performance of the RS-KKTPM approach with six preference-based EMO approaches: R-NSGA-II, g-NSGA-II [27], r-NSGA-II [28], R-NSGA-III [50], WV-MOEA-P [51], and MOEA/D-PRE.

The parameters of the suggested method are set as follows:
(i) Reproduction operators: as suggested in the original study [26], the simulated binary crossover (SBX) probability and SBX distribution index are set to 0.9 and 10, respectively, and the polynomial mutation probability and mutation distribution index are set to and 20, respectively.
(ii) Population size, maximum number of generations, RPs, and size δ of the ROI: the parameters for the different test instances are displayed in Tables 2 and 3.

For the constrained test problems and engineering design problems, constraints were handled by adding a penalty proportional to the constraint violation to the objective function values, as suggested in the original NSGA-II study. This is a popular approach for dealing with constraints in evolutionary algorithms for minimization problems.
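A minimal sketch of this penalty scheme follows (our own illustration; the penalty coefficient rho is an assumption and would need tuning in practice):

```python
def penalized_objectives(f_vals, g_vals, rho=1000.0):
    """Penalty-based constraint handling for minimization: add a term
    proportional to the total violation of the g_j(x) <= 0 constraints
    to every objective value, so infeasible solutions rank worse."""
    violation = sum(max(0.0, g) for g in g_vals)
    return [f + rho * violation for f in f_vals]
```

A feasible solution (all g_j(x) <= 0) is returned unchanged, while an infeasible one is pushed away from the front in every objective at once.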

The proposed RS-KKTPM algorithm is implemented in MATLAB R2019a. The source codes of the comparison methods are provided by PlatEMO [51] or downloaded from the authors’ home pages. The suggested and compared methods are run on a personal computer with an Intel(R) Core(TM) i7-7500 2.9 GHz quad-core processor and 8 GB RAM.

5.1. Experiments on Two-Objective Unconstrained ZDT Problems

We now apply the proposed approach to the unconstrained ZDT1 problem, which has a convex PF and thirty variables. Figure 9 illustrates the influence of different values of the ROI-size parameter on the distribution of solutions obtained by RS-KKTPM after 200 generations (i.e., 16,000 evaluations, given that RS-KKTPM evaluates 80 offspring per generation). Three RPs are chosen and shown as filled stars: the first two lie in the infeasible search space, while the third lies in the feasible search space. The ROI-size values corresponding to these RPs are detailed in Figures 9(a)–9(c).

Through different values of the ROI-size parameter, the proposed algorithm can steer the solutions towards the preferred regions in proportion to the size of each region. A distribution parameter is still required to ensure that the obtained solutions are well spread within each preferred region; in this problem, it is set to 0.005. The obtained solutions are clustered near the RPs, as shown in Figures 9(a)–9(c). The distribution of the obtained PO set depends on the range of each desired region; in particular, the range of the obtained solutions widens as the ROI-size value grows. One of the advantages of RS-KKTPM is that it allows us to adjust the ranges of the desired regions in a single run. Thus, if the DM wants a set of solutions near each preferred region whose number varies with the size of that region, different ROI-size values can be chosen; in other words, the DM can control the spread of the generated ROIs by changing this parameter. If the ROI-size parameter is set to 0.5, RS-KKTPM provides an approximation of the entire PF. In contrast, Figure 10 shows the PO set produced by R-NSGA-II for the same three RPs on the ZDT1 problem. R-NSGA-II is also run with a distribution parameter of 0.005 and a population of size 80 for 200 generations. Figure 10 shows that, with R-NSGA-II, the DM cannot obtain desired regions of different sizes in a single run; nor can R-NSGA-II steer the solutions toward the preferred regions in proportion to the size of each region.
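
The per-region sizing idea can be sketched as follows (a simplified illustration, not the actual RS-KKTPM selection operator; all points and radii are made up): each reference point carries its own radius, and a candidate belongs to region k only if its distance to RP k is within that radius, so ROIs of different sizes coexist in one run.

```python
import math

def assign_to_rois(points, rps, radii):
    """Group candidate objective vectors by the reference point (RP)
    whose region of interest (ROI) they fall into. Each RP has its own
    radius, so ROI sizes can differ within a single run."""
    rois = {k: [] for k in range(len(rps))}
    for p in points:
        for k, (rp, r) in enumerate(zip(rps, radii)):
            if math.dist(p, rp) <= r:   # inside ROI k
                rois[k].append(p)
    return rois

# Hypothetical candidates in a normalized bi-objective space.
points = [(0.1, 0.72), (0.15, 0.61), (0.5, 0.29), (0.9, 0.05)]
rps = [(0.1, 0.7), (0.9, 0.1)]
radii = [0.15, 0.06]          # a wide ROI and a narrow ROI
rois = assign_to_rois(points, rps, radii)
print(len(rois[0]), len(rois[1]))   # the wider ROI captures more points
```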

Henceforth, the distribution parameter is set to 0.001 in all problems. First, we consider the ZDT1 test problem with five RPs, of which three are infeasible and two are feasible, as shown in Figure 11. Each RP and the corresponding ROI size are given in Table 2. RS-KKTPM is run on this problem with 40 population members and a maximum of 200 generations, and the ROI-size parameter is set to 0.05 for each ROI. Figure 11 also demonstrates how easily the proposed algorithm handles multiple RPs: it discovers the various ROIs, and well-converged non-dominated solutions are obtained on the PF near all five RPs.

The next problem, ZDT2, has a non-convex PF. Two RPs are chosen, one feasible and one infeasible, as presented in Table 2, together with the range of the region corresponding to each RP. The population size and maximum number of generations are 40 and 200, respectively. Figure 12 displays the convergence and distribution of the solutions near the two chosen RPs. As shown in the figure, the RS-KKTPM algorithm easily handles both feasible and infeasible RPs and converges to well-distributed solutions within the desired ranges provided by the DM. RS-KKTPM also shows a good distribution on this problem when the RP lies in the infeasible region.

The ZDT3 test problem, with 30 variables, has a disconnected PF. Three RPs are selected (see Table 2), of which one is infeasible and two are feasible. The desired solutions produced by RS-KKTPM and R-NSGA-II are illustrated in Figures 13 and 14, respectively. The population size was 40, and the maximum number of generations was 200. These two figures demonstrate that our approach can steer solutions towards the PF in proportion to the size of each ROI, while R-NSGA-II cannot. As illustrated in Figure 13, our approach does not get stuck in any locally PO part, and all generated solutions are non-dominated, globally PO solutions.

Next, the ZDT4 test problem with 10 variables is solved using RS-KKTPM and R-NSGA-II. This problem has many local PFs. One RP, (0.6, 0.6), is used with an ROI size of 0.15, as displayed in Table 2, and the maximum number of generations is 500. The desired solutions produced by RS-KKTPM and R-NSGA-II are plotted in Figures 15 and 16, respectively. As the two figures illustrate, RS-KKTPM performs much better than R-NSGA-II in terms of both the distribution of solutions and their convergence to the PF. As shown in Figure 15, the selected RP is somewhat far from the PF, which indicates the ability of the introduced approach to work well with distant RPs. Moreover, even though the problem has more than 100 local fronts, the introduced algorithm converges well to the true PF.

Finally, we apply the proposed method to the ZDT6 problem, which has a non-convex PF. Figures 17 and 18 display the PO solutions obtained by RS-KKTPM and R-NSGA-II, respectively. Both techniques used the same RPs, population size, and number of generations (see Table 2). Three RPs are chosen: the first lies in the feasible search space, the second lies close to/on the PF, and the third lies in the infeasible search space. For RS-KKTPM, the ROI sizes corresponding to the RPs (0.9, 0.4), (0.3, 0.8), and (0.64, 0.59) are 0.03, 0.05, and 0.10, respectively; for R-NSGA-II, the ROI size for all RPs is 0.10. Note that R-NSGA-II cannot adjust the size of each ROI separately, as RS-KKTPM can. As is clear from Figure 17, the introduced algorithm steers solutions towards each ROI in proportion to its size, whereas R-NSGA-II cannot. The ROI corresponding to the RP (0.64, 0.59), with size 0.10, contains a large number of points compared to the ROI corresponding to the RP (0.3, 0.8), with size 0.05. Likewise, the ROI corresponding to the RP (0.9, 0.4), with size 0.03, contains fewer solutions than the ROIs corresponding to the other two RPs. Thus, if the DM wants PO sets of different sizes in the different regions, the introduced algorithm can provide them. In contrast, R-NSGA-II cannot control the number of solutions for each desired area, because its ROI-size parameter takes a single value for all preferred regions corresponding to the given RPs; when multiple RPs exist, R-NSGA-II cannot assign them different sizes in a single run.

5.2. Experiments on Two-Objective Constraint Problems

We now consider the two-objective constrained problems BNH, SRN, OSY, and TNK [46]. The RPs and other essential parameters used to solve these problems are shown in Table 2. BNH, TNK, and SRN each have only two constraints and two variables. First, the efficient solutions obtained by RS-KKTPM on BNH with three RPs are illustrated in Figure 19; the figure shows that our approach finds the desired regions near the RPs. Second, the solutions obtained on SRN with two RPs are displayed in Figure 20; the RS-KKTPM algorithm works well whether the RP lies in the feasible or the infeasible domain. Next, we consider the OSY test problem, which has six constraints and six variables. Figures 21 and 22 show the solutions obtained by RS-KKTPM and R-NSGA-II on OSY, respectively. Two RPs are chosen, with ROI sizes as given in Table 2 and a population size of 40. Although RS-KKTPM cannot fully converge to the true PF on this problem, it converges slightly better than R-NSGA-II, as shown in Figures 21 and 22.

Finally, the desired regions obtained by RS-KKTPM and R-NSGA-II on the TNK problem are illustrated in Figures 23 and 24, respectively. Two RPs and the population size are chosen as displayed in Table 2. As shown in Figures 23 and 24, the performance of our approach is comparable to that of R-NSGA-II on this problem. In summary, the introduced approach balances diversity and convergence around the ROI for constrained test problems and handles any number of predefined RPs.

5.3. Experiments on Three-Objective Problems

We select the original DTLZ1, DTLZ2, and DTLZ5 problems and their scaled versions. Table 3 provides information about these problems and the required parameters. First, the DTLZ1 problem contains many local PFs at which the search may stagnate, making it a relatively difficult problem for global optimality. Figure 25 shows the preferred PO solutions obtained by the RS-KKTPM and R-NSGA-II algorithms on the three-objective DTLZ1 problem, with the parameter values presented in Table 3 and three aspiration points. The distribution and convergence of the solutions found by RS-KKTPM are substantially superior to those of R-NSGA-II, as displayed in Figure 25. Next, RS-KKTPM is applied to the three-objective DTLZ2 problem. Figure 26 shows the solutions obtained with 60 population members and two RPs. The figure clearly illustrates that the RS-KKTPM algorithm can reach the efficient region of the true PF with a small population size, thereby helping the DM determine the required ROI easily. Finally, RS-KKTPM is applied to the three-objective DTLZ5 problem with two RPs, whose preferred-area sizes are 0.2 and 0.1, respectively, as shown in Table 3. Our algorithm is run on this test problem with a population of 60 for up to 300 generations. The obtained preferred areas of the true PO solutions are displayed in Figure 27. The solutions are distributed according to the size of each preferred area, and both areas are discovered in a single simulation run. Note that the number of solutions generated in the first preferred area (the larger one) is greater than that generated in the second. Well-converged and well-distributed solutions are obtained on the PF for both RPs. Thus, the DM can control the size of each efficient region of the true PF separately and in a single simulation run.

5.4. Experiments on Many-Objective Problems

Finally, we test the introduced approach on many-objective versions of the DTLZ1 and DTLZ2 problems. Table 3 displays all parameters used for the 5- and 10-objective instances. First, RS-KKTPM is applied to the 5- and 10-objective DTLZ1 problems, using a population size of 80 for both. Figures 28 and 29 present the obtained part of the front in parallel coordinate plots. One RP is used for each problem, as shown in Table 3. The PO solutions of DTLZ1 must satisfy the condition that the objective values sum to 0.5. RS-KKTPM can discover the desired regions of the efficient set corresponding to the single RP predefined by the DM.

Finally, the RS-KKTPM algorithm is applied to the 5- and 10-objective DTLZ2 problems with 14 and 19 decision variables, respectively, using a population of 60 and up to 300 generations. Figures 30 and 31 show the solutions obtained with two and one aspiration points, respectively. The PO solutions of DTLZ2 must satisfy the condition that the sum of the squared objective values equals one. Computing this sum for all generated PO solutions, all values lie in the range [1.000002, 1.000170] for the 5-objective problem and [1.000043, 1.002780] for the 10-objective problem, indicating that every solution is very close to the true PF. Thus, RS-KKTPM can converge to the part of the PF corresponding to the chosen aspiration points.
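
The optimality condition quoted above can be checked directly from the standard DTLZ2 definition (a minimal sketch; the position-variable values below are arbitrary): when all distance-related variables equal 0.5, the distance function g vanishes and the objective vector lies exactly on the unit hypersphere, so the squared objective values sum to 1.

```python
import math

def dtlz2(x, m):
    """Standard DTLZ2 objectives for decision vector x and m objectives."""
    k = len(x) - m + 1
    g = sum((xi - 0.5) ** 2 for xi in x[-k:])   # distance function
    f = []
    for i in range(m):
        fi = 1.0 + g
        for j in range(m - 1 - i):              # product of cosines
            fi *= math.cos(x[j] * math.pi / 2.0)
        if i > 0:                               # trailing sine term
            fi *= math.sin(x[m - 1 - i] * math.pi / 2.0)
        f.append(fi)
    return f

# 5 objectives, 14 variables: arbitrary position variables, distance
# variables fixed at 0.5 so that g = 0 (a Pareto-optimal point).
x = [0.3, 0.7, 0.1, 0.9] + [0.5] * 10
f = dtlz2(x, 5)
print(sum(fi ** 2 for fi in f))   # equals 1 (to floating-point precision)
```

This is exactly the check performed in the paragraph above: the closer the computed sum is to 1, the closer the solution is to the true PF.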

5.5. Experiments on Engineering Problems

We now apply the RS-KKTPM to a couple of engineering design problems. The first test problem has two objectives, while the second test problem has three.

5.5.1. Welded Beam Design Problem

We now employ the two-objective welded beam design problem [48] as a real-world example. The first objective is to minimize the fabrication cost, whereas the second is to minimize the end deflection of the welded beam. The welded beam structure is shown in Figure 32. This problem involves four decision variables, namely, the weld thickness, the clip length, the height of the bar, and the thickness of the bar, and it has four non-linear constraints. The mathematical formulation of the problem is given in [48].
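
For reference, the widely used bi-objective formulation of this problem (reconstructed here from the standard statement in Deb's work, with the usual variable names $h$, $l$, $t$, $b$; the exact version used in this article is the one in [48]) is:

```latex
\begin{align}
\min\ & f_1(\mathbf{x}) = 1.10471\,h^2 l + 0.04811\,t b\,(14.0 + l), \\
\min\ & f_2(\mathbf{x}) = \delta(\mathbf{x}) = \frac{2.1952}{t^3 b}, \\
\text{s.t.}\ & g_1(\mathbf{x}) \equiv 13{,}600 - \tau(\mathbf{x}) \ge 0, \\
& g_2(\mathbf{x}) \equiv 30{,}000 - \sigma(\mathbf{x}) \ge 0, \\
& g_3(\mathbf{x}) \equiv b - h \ge 0, \\
& g_4(\mathbf{x}) \equiv P_c(\mathbf{x}) - 6{,}000 \ge 0, \\
& 0.125 \le h, b \le 5.0, \qquad 0.1 \le l, t \le 10.0,
\end{align}
```

where $\sigma(\mathbf{x}) = 504{,}000/(t^2 b)$ is the bending stress, $P_c(\mathbf{x}) = 64{,}746.022\,(1 - 0.0282346\,t)\,t b^3$ is the buckling load, and $\tau(\mathbf{x})$ is the composite shear stress in the weld, defined in [48].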

Figure 33 shows the efficient solutions produced by RS-KKTPM, R-NSGA-II, R-NSGA-III, and MOEA/D-PRE [30] for the welded beam design problem. The relevant parameters of the four compared algorithms are as follows. Three RPs are chosen: (3, 0.005), (15, 0.003), and (25, 0.002). All algorithms are run with 100 population members for 200 generations. For the RS-KKTPM approach, the radii corresponding to the three RPs are 0.2, 0.1, and 0.05, respectively; for the remaining algorithms, the size of the preferred regions is 0.1. Note that the proposed algorithm can control the size of each region separately, while the other algorithms cannot. In this problem, all objective function values are normalized into the interval [0, 1] using the ideal point as the lower bound and the nadir point estimate as the upper bound; we used (0, 0) as the ideal point and (36, 0.015) as the nadir point.
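
The normalization step can be sketched as follows (a minimal example using the ideal and nadir points quoted above):

```python
def normalize(f, ideal, nadir):
    """Map objective values into [0, 1] using ideal/nadir bounds:
    f_norm_i = (f_i - ideal_i) / (nadir_i - ideal_i)."""
    return [(fi - zi) / (ni - zi) for fi, zi, ni in zip(f, ideal, nadir)]

# Ideal and nadir points for the welded beam problem, as given above.
ideal, nadir = (0.0, 0.0), (36.0, 0.015)

# The first reference point, (3, 0.005), in normalized objective space:
f_norm = normalize([3.0, 0.005], ideal, nadir)
print(f_norm)
```

Normalizing both objectives to a common scale is what makes a single radius per RP meaningful, since cost and deflection differ by several orders of magnitude in their raw units.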

Figure 33(a) shows that the introduced algorithm outperforms the others by adjusting the size of each preferred region separately according to the supplied RP; it steers the solutions towards the preferred regions in proportion to the size of each area. As shown in Figure 33(a), the solutions obtained in the preferred region with the largest radius (0.2) outnumber those in the other two preferred regions, whereas the region with the smallest radius (0.05) contains the fewest solutions. Figure 33(a) also shows that RS-KKTPM produces well-distributed solutions along the preferred part of the front. These advantages of RS-KKTPM are mostly absent in R-NSGA-II, R-NSGA-III, and MOEA/D-PRE (see Figures 33(b)–33(d)). In summary, if the DM is interested in finding PO solutions in three main areas (intermediate cost and deflection, minimum cost, and minimum deflection), the introduced algorithm finds solutions near the given RPs rather than on the whole PF, allowing the DM to deal with only the few solutions that lie in the parts of her/his interest. Moreover, if the DM wants these solutions with different sizes for the regions, the proposed algorithm can provide them.

5.5.2. Car Side Impact Design Problem

Car side impact design is a constrained optimization problem [49] with three optimization objectives: the first is to reduce the car’s weight, the second is to minimize the pubic force experienced by a passenger, and the last is to minimize the average velocity of the V-Pillar responsible for withstanding the impact load. It has seven decision variables, the thicknesses of the B-Pillar, door beam, B-Pillar inner reinforcement, floor side inner, door beltline reinforcement, cross members, and roof rail (see Figure 34). The mathematical model of this problem is given in [49].

Figure 35 shows the solutions produced by RS-KKTPM, R-NSGA-II, R-NSGA-III, and MOEA/D-PRE on the car side impact design problem. The relevant parameters of the four compared approaches are as follows. Two RPs are chosen: RP1 = (40, 3.5, 11) and RP2 = (26, 4, 11.5). The algorithms are run with 80 population members for 300 generations. For the RS-KKTPM algorithm, the radii corresponding to RP1 and RP2 are 0.1 and 0.05, respectively; for the remaining algorithms, the size of the preferred regions is 0.1. In this problem, all objective function values are normalized using the ideal point (15, 3, 10) and the nadir point (50, 5, 14). Figure 35(a) shows that the suggested algorithm can control the number of solutions for each desired area in proportion to its size: a larger desired area receives more solutions, while a smaller preferred region receives fewer. As displayed in Figure 35(a), the number of solutions in the first preferred area, corresponding to RP1, exceeds that in the second, corresponding to RP2, because the radius of the first region is larger than that of the second. Therefore, RS-KKTPM can control the size of each desired area separately, while the other algorithms cannot (see Figures 35(b)–35(d)). Thus, if the DM is interested in finding solutions in regions of different sizes, the introduced algorithm can find solutions near the RPs in proportion to the size of each preferred area.

5.6. Performance Metrics

No single performance metric can accurately assess an EMO algorithm’s performance [54]. In our empirical investigations, we use two of the most recognized performance metrics for assessing the quality of the preferred efficient solutions produced by preference-based EMO algorithms: R-HV and R-IGD [55]. Both metrics capture the convergence to the ROI and the diversity of the efficient solutions simultaneously. They are based on the hypervolume (HV) and inverted generational distance (IGD) metrics, which were designed for the entire PF, adapted to partial, preferred sets of efficient solutions. The larger the R-HV value, or the smaller the R-IGD value, the better the performance of the tested algorithm. Additional details can be found in Li et al. [55].
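
For orientation, the plain IGD computation that R-IGD builds on can be sketched as follows (R-IGD additionally filters and transfers points relative to the RP, as described by Li et al. [55]; this sketch shows only the underlying IGD step, with made-up sample points):

```python
import math

def igd(reference_front, approximation):
    """Inverted generational distance: average distance from each
    reference-front point to its nearest point in the approximation.
    Smaller values indicate better convergence and coverage."""
    total = 0.0
    for r in reference_front:
        total += min(math.dist(r, a) for a in approximation)
    return total / len(reference_front)

# A tiny reference front and an approximation missing its middle region:
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.0, 1.0), (1.0, 0.0)]
print(igd(front, approx))   # penalized for the uncovered middle point
```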

5.7. Performance Comparison with Other Preference-Based EMO Algorithms

We compare RS-KKTPM with six preference-based EMO algorithms, namely, R-NSGA-II, g-NSGA-II, r-NSGAII, R-NSGA-III, WV-MOEA-P, and MOEA/D-PRE, to verify its performance. The parameters of these algorithms are set in advance so that they approximate a similar ROI, making the experimental findings comparable. The parameters used in the comparative study are summarized as follows:
(i) Reproduction operators: in all simulations, crossover probability = 0.9, mutation probability = , distribution index for the SBX operator = 10, and distribution index for the polynomial mutation operator = 20.
(ii) Number of evaluations, population size, and RP coordinates: the parameters for the different test instances are displayed in Table 4.
(iii) Number of runs: 21 for all algorithms on all test problems.
(iv) Size of the preferred region: 0.1 for all algorithms on all test problems.
(v) Parameters in r-NSGAII: the weight vector was set as .

As mentioned earlier, S-KKTPM requires the gradients of all objective and constraint functions; algebraic calculations are then performed to compute the theoretical closeness of a point to the true optimal solution. For MOPs, S-KKTPM calculates this closeness metric with respect to a specific PO point. For a fair comparison between the introduced RS-KKTPM approach and the six compared EMO approaches, we used an equal number of function evaluations for all algorithms.

Evaluating S-KKTPM for every solution is an additional computational expense. However, compared to the cost of the function (and gradient) evaluations themselves, this overhead is small: once the gradients are computed for a real-world problem, the time needed for the S-KKTPM calculation adds little to the overall computational time. Meanwhile, S-KKTPM helps improve convergence and can differentiate between non-dominated solutions in a way that the Euclidean distance, which carries no optimality information, cannot.

Tables 5 and 6 display the mean and standard deviation of the R-HV and R-IGD values, respectively. The best mean value of each metric is highlighted in bold.

According to the R-HV metric, the introduced RS-KKTPM algorithm approximates the ROI better than the other algorithms for all examined problems except the ZDT4 and two further test cases (see Table 5). We obtain almost the same results with the R-IGD metric, as shown in Table 6. Based on the R-HV and R-IGD values, RS-KKTPM demonstrates better distribution and convergence than the other algorithms. The findings on the 14 benchmark test problems show that the RS-KKTPM approach outperforms the other approaches in 11 of 14 comparisons.

6. Conclusions

In this study, the RS-KKTPM preference-based EMO algorithm is proposed. It is an extension of the R-NSGA-II method in which the Euclidean distance metric is replaced by the S-KKTPM metric. The new algorithm has the following properties:
(i) RS-KKTPM can obtain the ROI for any position of the RP (in the feasible area, on/near the PF, or in the infeasible area).
(ii) The range of each obtained ROI can be controlled by adjusting the interest radius of each ROI separately and in a single simulation run.
(iii) The RS-KKTPM algorithm improves the quality of the PF approximation and yields a uniform distribution of the approximating objective vectors.
(iv) RS-KKTPM performs better than R-NSGA-II, g-NSGA-II, r-NSGAII, R-NSGA-III, WV-MOEA-P, and MOEA/D-PRE on most multi- and many-objective problems.

Future research will focus on using the S-KKTPM metric to improve the performance of other EMO algorithms based on reference directions, such as MOEA/D and NSGA-III. These approaches can also be applied to engineering design problems and highly complex problems.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.