Abstract

Real-world multiobjective optimization problems are characterized by multiple types of decision variables. In this paper, we address weapon selection and planning problems (WSPPs), which include decision variables for weapon-type selection and weapon amount determination. The large solution space and the discontinuous, nonconvex Pareto front increase the difficulty of solving the problem. This paper solves the addressed problem by means of a multiobjective evolutionary algorithm based on decomposition (MOEA/D). Two mechanisms are designed to handle the complex combinatorial characteristics of WSPPs. The first is that the neighborhood of each individual is divided into selection and replacement neighborhoods. The second is that the neighborhood size changes during evolution through a distance parameter that constrains the search scope of each subproblem. The proposed algorithm, termed MOEA/D with distance-based divided neighborhoods (MOEA/D-DDN), can overcome possible drawbacks of the original MOEA/D with the weighted sum approach on complex combinatorial problems. Benchmark instances are generated to verify the proposed approach. Experimental results suggest the effectiveness of the proposed algorithm.

1. Introduction

Portfolio optimization problems and project planning problems are two classic topics in the areas of operations research and management science. Many optimization problems in industry, economics, and production can be modeled as portfolio problems or as planning and scheduling problems. For instance, in the defense planning area, most countries have paid much attention to capability-based planning (CBP) since the beginning of this century [1]. CBP usually involves the development of new types of weapons for future capability fulfilment. Constrained by limited budgets, decision makers need to balance different objectives such as cost, efficiency, and risk [2–4].

During the process of CBP, decision makers need to decide both “what to buy” and “when to buy,” which correspond to the concepts of portfolio selection and project planning, respectively. However, in the existing literature, portfolio problems and project planning problems are two relatively independent branches. The former focuses on item selection from an available set, while the latter concerns activity arrangement over a time horizon. The simultaneous consideration of these two types of optimization processes is rather scant in the literature. In previous research, the basic optimization process of weapon development in CBP was modeled as a weapon selection and planning problem (WSPP) [5].

In this paper, we refine the WSPP model and design effective multiobjective evolutionary algorithms to solve the problem. The main contributions of this paper are twofold:
(1) This paper refines the multiobjective optimization model of WSPPs. One main trait of the WSPP is that each piece of weapon equipment has its own service period. This is a main difference between WSPPs and other planning and scheduling problems, and the previous work did not consider this attribute [5]. Weapon equipment contributes effectiveness only during its service period, so the calculation of solution effectiveness needs to consider the beginning point and the end point of each unit of weapon equipment. Therefore, the solution evaluation becomes more expensive. Thus, incorporating the service period attribute fundamentally changes the structure of the problem and increases its complexity. In the proposed model, the simultaneous consideration of solution effectiveness and risk provides decision makers with important trade-offs. To illustrate the addressed problem, we generate three sets of test problems including 15 instances of different scales.
(2) The decision space of the WSPP is discrete, and the Pareto front is discontinuous and nonconvex. Thus, multiobjective evolutionary algorithms (MOEAs) are preferred for solving the problem. Although classic MOEAs have been reported and tested on many benchmarks, their performance on complex combinatorial problems is still an open question. We solve the problem in the framework of the multiobjective evolutionary algorithm based on decomposition (MOEA/D). In MOEA/D, the weighted sum approach is simple and effective for convex problems, but its performance is limited when the Pareto front is nonconvex and/or disconnected. Furthermore, components of the original MOEA/D, such as neighborhood definition, selection, and replacement, might be problematic for combinatorial problems. To address these issues, a mechanism of distance-based divided neighborhoods (DDN) is designed and incorporated into MOEA/D. The proposed algorithm is termed MOEA/D-DDN. Experimental results show that the DDN mechanism is effective and that MOEA/D-DDN performs well in solving WSPPs.

The rest of the paper is organized as follows. In Section 2, we briefly review the literature on portfolio problems and capability planning problems. Problem formulation and proposed MOEA/D-DDN are described in Sections 3 and 4, respectively. Section 5 presents the genetic operators for WSPPs. Experimental results are reported in Section 6. Finally, some conclusions are drawn in Section 7.

2. Literature Review

Since the addressed WSPP is an amalgamation of portfolio and planning problems, we briefly review related work in these two areas in this section.

2.1. Portfolio Optimization with Evolutionary Computation

Portfolio problems inherently have multiobjective traits. Due to nonconvex constraints in practical cases, metaheuristic algorithms such as evolutionary computation (EC) are preferred for solving portfolio problems. Arnone et al. [6] made one of the first attempts to apply evolutionary algorithms to the problem; the authors discussed the possibility of a genetic approach and implemented some of the ideas. Maringer and Kellerer [7] optimized cardinality-constrained portfolios by designing an algorithm combining simulated annealing and evolution strategies. Cura [8] presented a particle swarm optimization approach to the portfolio optimization problem and compared the results with those of other metaheuristic approaches. Although the above works applied different algorithms to portfolio problems, one shared characteristic is that the problem was converted into a single-objective optimization problem by a weighted sum approach.

With the development of MOEAs, the last two decades have witnessed growing use of MOEAs for solving portfolio problems. Doerner et al. [9] studied project portfolio problems with objectives of benefit and remaining resources. The authors presented a two-phase solution procedure, in which the first phase uses Pareto ant colony optimization to determine the solution space of all efficient portfolios. Branke et al. [10] solved portfolio problems with nonconvex constraints by integrating an active set algorithm into a MOEA, termed the envelope-based MOEA. Pai and Michel [11] first employed k-means cluster analysis to eliminate the cardinality constraint and then designed an evolution-strategy-based algorithm to solve a subclass of portfolio optimization problems. Gutjahr et al. [12] and Kremmel et al. [13] addressed project portfolio problems. In Gutjahr et al. [12], economic and staff competence benefits are simultaneously optimized; the overall problem is decomposed into a master problem and a slave problem, and the authors, respectively, implemented NSGA-II [14] and Pareto ant colony optimization to solve the subproblems. Kremmel et al. [13] focused on software project portfolio problems, and a multiobjective optimization approach based on the Prototype Optimization with Evolved iMprovement Steps (mPOEMS) was used to solve the problem. In Anagnostopoulos and Mamanis’s study [15], the performances of different MOEAs for portfolio problems were compared. Experimental results suggest that the performances of the Strength Pareto Evolutionary Algorithm 1 (SPEA1) and the Pareto Envelope-based Selection Algorithm (PESA) are comparable and better than that of NSGA-II. In later research by the same authors [16], the performances of five MOEAs for cardinality-constrained portfolio optimization problems were investigated. For recent comprehensive surveys on solving portfolio problems with MOEAs, readers are referred to Metaxiotis and Liagkouras [17] and Ponsich et al. [18].

(1) share: original budget share of each time unit (from the budget distribution part of the chromosome)
(2) actualBudget: actual available budget of each year
(3) yearSpent: actual capital spent in each year
(4) available: available budget at each time unit
(5) spent: budget spent at each time unit
(6) accumulated: accumulated unspent budget, initialized to 0
(7) for each year y of the planning horizon do
(8)  actualBudget = yearly budget of year y; yearSpent = 0
(9)  for each time unit m of year y do
(10)   available = actualBudget × share of time unit m
(11)   if accumulated > 0 then
(12)    available = available + accumulated; accumulated = 0
(13)   end if
(14)   spent = 0
(15)   U is the total number of unselected and unplanned weapon units;
(16)   index = 1;
(17)   list the unselected weapon units in ascending order of their priority values;
(18)   while available > 0 and index ≤ U do
(19)    take the weapon unit whose priority value is the indexth lowest, and let w be its type;
(20)    index = index + 1;
(21)    if available ≥ cost of one unit of type w then
(22)     bool exceed = false
(23)     tentatively set the development starting time of this unit to m
(24)     compute the resulting amount of type w under development at each covered time point
(25)     for each time point covered by the development period do
(26)      if this amount exceeds the maximum amount of type w under development then
(27)       exceed = true
(28)       break;
(29)      end if
(30)     end for
(31)     if exceed = false then
(32)      the weapon unit is selected and planned with starting time m;
(33)      available = available − cost; spent = spent + cost; yearSpent = yearSpent + cost
(34)     end if
(35)    else
(36)     accumulated = accumulated + available
(37)     break;
(38)    end if
(39)   end while
(40)  end for
(41) end for

Algorithm 1: Decoding procedure that converts a chromosome into a selection and planning solution.

Compared with other areas such as finance and project management, there are few applications of portfolio optimization in the defense area. Greiner et al. [19] studied the screening of weapon system development projects and a realistic application in the air force. In that research, the authors presented a decision support methodology integrating an analytic hierarchy process (AHP) and a 0-1 integer portfolio optimization model. Yang et al. [20] employed heuristic algorithms to deal with portfolio selection problems for military investment assets. Kangaspunta et al. [4] studied a weapon system portfolio problem in which a weapon portfolio is evaluated by an additive value function.

2.2. Planning and Scheduling in Defense Area

In the defense area, decisions on weapon acquisition over a given time horizon can be regarded as long-term planning problems. Abbass et al. [21] studied time-constrained resource planning problems in which the main task is optimizing the mix of vehicles to fulfil future tasks; the authors developed an evolutionary multiobjective approach to obtain efficient solutions. Bui et al. [22] considered the military capability planning problem as a combination of resource-constrained project scheduling and resource investment problems. The planning process was modeled as a multiobjective problem, and an evolutionary algorithm was designed to address the optimization task. In the planning of weapon systems, the effectiveness of a planning solution is usually affected by the weapons held by the counterpart. Golany et al. [23] focused on the problem of developing effective countermeasures given limited resources. Usually, weapon acquisition needs to consider selection and planning processes simultaneously to obtain overall optimal solutions. However, in the project planning and scheduling literature, work that simultaneously addresses the combination of these processes is rather scant, except for Ghasemzadeh et al. [24], Sun and Ma [25], Gutjahr et al. [12], and Carazo et al. [26]. In these works, decision makers are only required to determine whether a project is selected or not. The problem addressed in this paper differs from the models in the existing work by considering the amount of each weapon type at each decision point. Since weapon developments are subject to resource constraints and the effectiveness of a solution should be measured by considering all selected weapons, the problem needs to be solved at the overall level. In the weapon selection process, the simultaneous consideration of the selected type and the corresponding amount expands the decision space and dramatically increases the complexity of the problem. Thus, it is worthwhile to design more effective algorithms for solving WSPPs.

3. Weapon Selection and Planning Problems

3.1. Problem Description

A WSPP can be described as the process in which decision makers select weapons and plan their development at each decision time point. The planning horizon is divided into M units, indexed by m = 1, …, M, where each unit indicates one month [12]. There are W types of weapons, and each weapon type has the following attributes:
(i) the maximum available amount of the weapon type
(ii) the cost of one unit of the weapon type
(iii) the value of one unit of the weapon type
(iv) the risk of one unit of the weapon type
(v) the development period, i.e., the time from the beginning of the weapon development to the deployment of the weapon
(vi) the service period of the weapon type

Similar to revenue in project portfolios [13], the initial value of each unit of weapon is given in advance. In our research, this value can be interpreted as the damage rate per unit time inflicted by the weapon on counterparts [23]. Without loss of generality, we assume that the initial weapon unit values and risks are normalized. We assume that military departments pay all costs to the contractors at the beginning of development. At each time point, decision makers need to decide the selected amount of each weapon type. The total selected amount of each weapon type is the sum over all time points, and the selected units of each type are indexed from 1 up to this total. For each weapon type, the development starting time of every selected weapon unit is recorded; for instance, one such variable indicates the development starting time of unit 2 of weapon type 1.
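To make these notions concrete, the following minimal Python sketch shows one possible data structure for the weapon-type attributes and the selection decision variables described above; all names and example values are illustrative and not taken from the paper.

from dataclasses import dataclass

@dataclass
class WeaponType:
    # Illustrative container for the attributes of one weapon type.
    max_amount: int        # maximum available amount of this type
    unit_cost: float       # cost of one unit
    unit_value: float      # initial value (damage rate) of one unit, normalized
    unit_risk: float       # risk of one unit, normalized
    dev_period: int        # time units from development start to deployment
    service_period: int    # time units a deployed unit remains in operation

# Decision variables: x[w][m] = number of units of type w whose development
# starts at time unit m (a nonnegative integer, set by the optimizer).
M, W = 24, 3                       # planning horizon and number of types (example values)
x = [[0] * M for _ in range(W)]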

At each time point, two quantities need to be tracked for each weapon type:
(i) the amount of weapon units under development at time point m
(ii) the amount of weapon units in operation at time point m

Note that both quantities are determined by the development starting times of the selected units. For example, consider a planning horizon, a development period, and a service period for a single weapon type, and suppose that two units of this type are selected and developed at the first and the fifth time points, respectively. Then, the amounts under development and in operation at each time point can be obtained as follows.
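The following Python sketch illustrates this calculation; the planning horizon of 12 units, the development period of 3, and the service period of 4 are assumed values for illustration only.

M = 12                              # planning horizon (assumed)
dev_period, service_period = 3, 4   # assumed development and service periods
starts = [1, 5]                     # development starting times of the two selected units

n_dev = [0] * (M + 1)               # amount under development at each time point (1-indexed)
n_op = [0] * (M + 1)                # amount in operation at each time point

for s in starts:
    for m in range(s, min(s + dev_period, M + 1)):
        n_dev[m] += 1               # under development during [s, s + dev_period)
    deploy = s + dev_period
    for m in range(deploy, min(deploy + service_period, M + 1)):
        n_op[m] += 1                # in operation during the service period after deployment

print(n_dev[1:], n_op[1:])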

The WSPP is subject to the following constraints.

3.1.1. Maximum Budget for Each Year

The budget for weapon development provided by the government in each year is limited, and the capital becomes available at the beginning of each year. Similar to Carazo et al. [26], we assume that capital not spent in previous years can be accumulated and transferred to the next year. The actual available capital of a year is therefore the yearly budget plus the capital accumulated from previous years, and the actual capital spent in a year cannot exceed this amount.
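A minimal sketch of this carry-over rule is given below; the yearly budget and spending figures are placeholders.

budgets = [100.0, 120.0, 110.0]   # yearly budgets provided by the government (assumed)
spent   = [ 90.0, 100.0, 115.0]   # actual capital spent in each year (assumed)

carry = 0.0
for year, (b, s) in enumerate(zip(budgets, spent), start=1):
    available = b + carry          # unspent capital is transferred to the next year
    assert s <= available          # spending may not exceed the available capital
    carry = available - s
    print(f"year {year}: available = {available:.1f}, carried over = {carry:.1f}")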

3.1.2. Minimum Amount in Operation

For a weapon planning solution, some types of weapons cannot form combat effectiveness unless they are deployed at a considerably large scale. Specifically, a minimum amount of units in operation is required for each weapon type to form combat effectiveness at each time point. If the amount in operation is below this minimum, then the effectiveness value of the weapon type at time point m is 0. This constraint is similar to a buy-in threshold constraint [10, 27] in portfolio management problems.

3.1.3. Maximum Amount under Development

The maximum amount of a weapon type under development is bounded because of the limited manufacturing capability of contractors. Accordingly, the amount of units of a weapon type under development at any time point cannot exceed this maximum.

3.2. Objective Calculation

In this paper, we focus on the trade-off between solution effectiveness and risk.

3.2.1. Calculation of Effectiveness

For a solution of the WSPP, effectiveness needs to be evaluated at the overall level over the whole planning horizon. Usually, defense agencies intend to maximize the damage rates of developed weapons [23]. Similar to weapon portfolios, the overall effectiveness can be expressed with an additive value function [4]. However, a simple additive value does not consider the balance of effectiveness over the whole planning process. In practice, decision makers may prefer a weapon plan that can maintain high and stable effectiveness during the whole planning horizon. Thus, in this paper, we measure the effectiveness of a weapon plan by the average effectiveness per time unit as well as the deviation of effectiveness over the whole planning horizon.

Furthermore, we assume that the unit value decreases with the time unit m. This assumption is reasonable in practice because the effectiveness of a weapon will decrease due to performance degradation or the emergence of countermeasures. We use a formulation in which the unit value declines over the planning horizon, controlled by a coefficient that indicates the minimum unit effectiveness at the end of the horizon; its value is fixed in this research for the sake of calculation.

Then, at each time unit m, the effectiveness is taken as the additive effectiveness value of all weapon units in operation, where a weapon type contributes only if its amount in operation reaches the minimum threshold defined in Section 3.1.2. The overall effectiveness of a WSPP solution can then be obtained by a mean-variance model, i.e., the mean effectiveness over the whole planning horizon minus β times its standard deviation. Similar to Yamashita et al. [28], the parameter β is a variance factor that determines the degree of effectiveness-unbalance aversion of the decision maker. Note that the overall effectiveness is to be maximized.
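A minimal sketch of this mean-variance aggregation is shown below; the per-time-unit effectiveness values and the variance factor β are placeholder values.

from statistics import mean, pstdev

# Effectiveness accumulated at each time unit (placeholder data); a weapon type
# contributes at time m only if its amount in operation reaches the minimum
# threshold, and unit values are discounted over time as described above.
e_per_time_unit = [0.0, 0.0, 1.8, 2.4, 2.3, 2.1, 2.0, 1.9]
beta = 0.5                      # variance factor (degree of unbalance aversion)

overall_effectiveness = mean(e_per_time_unit) - beta * pstdev(e_per_time_unit)
print(overall_effectiveness)    # to be maximized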

3.2.2. Calculation of Risk

The risk corresponding to each weapon development is considered to be a nonincreasing function of the development starting time. Specifically, if a weapon unit is planned to be developed at a later time point, the risk it faces will be lower, because the technological uncertainties of weapon R&D decrease as new technologies mature in the future. Similarly, the overall risk of a solution can be modeled by an additive value function with a time factor. Here, we use the average risk of the selected weapon units, each evaluated at its development starting time, to indicate a solution’s risk value.
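The exact time-discount function is not fixed here, so the sketch below uses an assumed linear decrease toward a floor value; only the averaging over the selected units follows the description above.

# Risk of a solution: average over all selected weapon units of the unit risk
# evaluated at its development starting time.
def discounted_risk(unit_risk, start, horizon, floor=0.5):
    # Linear decrease toward `floor` (an assumption for illustration only).
    factor = 1.0 - (1.0 - floor) * (start - 1) / (horizon - 1)
    return unit_risk * factor

M = 12
selected = [(0.8, 1), (0.8, 5), (0.6, 9)]   # (unit risk, starting time) pairs (assumed)
solution_risk = sum(discounted_risk(r, s, M) for r, s in selected) / len(selected)
print(solution_risk)            # to be minimized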

It is known that a multiobjective problem (MOP) requires conflict among the objectives. In our problem, the conflict mainly derives from the planning of the selected weapon units. For example, if a new weapon is to be developed, scheduling the R&D activity later can decrease the risk of failure because of possible technology breakthroughs; however, the effectiveness contributed by the weapon will then be lower than if it were developed and operated in the current period.

3.3. Multiobjective Optimization Model for WSPPs

For WSPPs, effectiveness is to be maximized and risk is to be minimized. Without loss of generality, we convert the problem into a minimization format by optimizing the negative value of effectiveness. The multiobjective optimization model of a WSPP can be summarized as follows.

3.3.1. Decision Variables

For an overall solution, the decision variables are the amounts of each weapon type selected at each time point. Note that each decision variable is a nonnegative integer.

3.3.2. Objectives

The two objectives, respectively, represent the maximization of effectiveness and the minimization of risk of a weapon selection and planning solution.

3.3.3. Constraints

The first constraint is the budget constraint imposed on each year. The second constraint reflects the maximum capability of developing each type of weapon.

4. MOEA/D with Distance-Based Divided Neighborhoods

In this research, we solve the problem with the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [29]. MOEA/D decomposes a MOP into a set of subproblems whose objective functions are weighted linear or nonlinear aggregations of the original objectives. The optimal solutions of all subproblems constitute the Pareto optimal set of the original MOP. In MOEA/D, all the scalar objective optimization subproblems are optimized simultaneously in a single run. A neighborhood relationship among the subproblems is defined based on the distances among their weight vectors. It should be noted that various algorithm components of MOEA/D have been investigated for performance improvement [30–32]. There are several approaches to constructing a scalar optimization function [32]. For simplicity, this paper uses the weighted sum approach.
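For reference, the standard weighted sum scalarization decomposes the bi-objective WSPP into H subproblems: for a weight vector $\lambda = (\lambda_1, \lambda_2)$ with $\lambda_1 + \lambda_2 = 1$, the scalar objective to be minimized is

$$g^{ws}(x \mid \lambda) = \lambda_1 f_1(x) + \lambda_2 f_2(x),$$

where $f_1$ and $f_2$ denote the two minimization objectives of the WSPP, i.e., the negated effectiveness and the risk.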

In this section, we propose a distance-based divided neighborhoods (DDNs) strategy for MOEA/D.

4.1. Motivations

Firstly, we would like to elaborate our motivation with the following comments:
(1) A basic assumption behind MOEA/D is that adjacent subproblems should have similar optimal solutions. Although this assumption is reasonable for most function optimization problems, there are exceptions, especially for some combinatorial problems [33, 34]. If the Pareto front is disconnected or not evenly distributed, parents selected with a constant neighborhood size from the neighborhood of a subproblem may have a lower chance of generating promising offspring, because individuals that are too different from each other are generally less likely than similar individuals to produce fit offspring through mating [35]. The motivation behind the use of a nonconstant neighborhood size is illustrated in Figure 1.
(2) In the original MOEA/D, parents are randomly selected from the neighborhood of a subproblem. However, parents with different domination ranks vary considerably in their capability of generating new offspring that contribute to both convergence and diversity. This is especially pronounced during the early stage of evolution. Figure 2 illustrates a possible limitation of the original selection mechanism. We are aware that this situation depends on the problem characteristics; however, for most combinatorial problems, the final Pareto optimal solutions are rather sparse, and for a population with a relatively large size, this situation is not rare during the evolution process.
(3) Some variants of MOEA/D address the strategy of population replacement. One alternative is that a new solution can replace any other solution in the whole population [34]. In Zhou and Zhang’s study [36], a new solution replaces the one that can be improved most among all the subproblems. In Wang et al.’s study [37], neighboring solutions of a suitable subproblem are replaced by a new solution with a better scalar function value. Nevertheless, in two situations, an obtained nondominated solution could be replaced by an inferior solution: the first is that the Pareto front is nonconvex, and the second is that the subproblems have different convergence speeds. Furthermore, when the Pareto front is nonconvex, the weighted sum approach does not work well. This shortcoming can be overcome if the search direction of a subproblem is restricted within a relatively narrow scope. The motivation for using such a restriction is illustrated in Figure 3.

4.2. Distance-Based Divided Neighborhood Strategy

Previous research suggests that the use of different neighborhoods, i.e., a mating neighborhood and a replacement neighborhood, can achieve better performance for MOEA/D [37, 38]. Along this avenue, in the proposed distance-based divided neighborhood (DDN) strategy, the original neighborhood (o-neighborhood) of a subproblem is divided into a selection neighborhood (s-neighborhood) and a replacement neighborhood (r-neighborhood). Furthermore, the size of the original neighborhood is restricted and changes according to a pregiven search scope. The DDN strategy is illustrated in Figure 4.

For a solution corresponding to subproblem i, the basic DDN strategy works as follows:
(1) Determine the s-neighborhood and r-neighborhood, respectively.
(2) Mating: select two solutions from the s-neighborhood and generate a new solution using the genetic operators.
(3) Replacement: update the r-neighborhood by replacing at most a pregiven number of solutions with worse scalar function values.

Before defining the s-neighborhood and the r-neighborhood, we introduce a parameter d to indicate the search scope of a subproblem. We first determine a reference point in the objective space. For a solution, a line crossing the reference point and the solution can be constructed. Note that, for the two-objective problems addressed in this research, the line can alternatively be determined by the reference point and the subproblem’s associated weight vector. Then, the distance from another solution to this line is defined. In the calculation of these distances, objective values are normalized for each objective by utilizing the obtained maximum and minimum values of that objective. Solutions in the s-neighborhood and the r-neighborhood are, respectively, defined as follows.

Definition 1. Given a solution of subproblem i, a neighbor is called a selection neighbor (i.e., a member of the s-neighborhood) if (1) its distance to the search line of subproblem i is no greater than d and (2) its scalar function value with respect to the weight vector of subproblem i is not worse than that of the current solution.

Definition 2. Given a solution of subproblem i, a neighbor is called a replacement neighbor (i.e., a member of the r-neighborhood) if (1) its distance to the search line of subproblem i is no greater than d and (2) its scalar function value with respect to the weight vector of subproblem i is worse than that of the current solution. One can notice from the above two definitions that the original neighborhood is restricted by d and divided according to the scalar function.
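The following Python sketch, in which all names are ours and the details (e.g., the choice of reference point) are assumptions consistent with the definitions above, shows how the restricted neighborhood of a two-objective subproblem can be split into an s-neighborhood and an r-neighborhood.

import math

def weighted_sum(f, lam):
    # Weighted sum scalarization of a normalized bi-objective vector.
    return lam[0] * f[0] + lam[1] * f[1]

def distance_to_direction(f, z, lam):
    # Perpendicular distance from the normalized objective vector f to the line
    # through the reference point z along the weight vector lam (two objectives).
    norm = math.hypot(lam[0], lam[1])
    ux, uy = lam[0] / norm, lam[1] / norm
    px, py = f[0] - z[0], f[1] - z[1]
    proj = px * ux + py * uy
    return math.hypot(px - proj * ux, py - proj * uy)

def divide_neighborhood(i, objs, o_neighbors, lam, z, d):
    # Split the o-neighborhood of subproblem i into s- and r-neighborhoods.
    g_i = weighted_sum(objs[i], lam)
    s_nb, r_nb = [], []
    for j in o_neighbors:
        if distance_to_direction(objs[j], z, lam) > d:
            continue                      # outside the restricted search scope
        if weighted_sum(objs[j], lam) <= g_i:
            s_nb.append(j)                # not worse: candidate mating parent
        else:
            r_nb.append(j)                # worse: candidate for replacement
    return s_nb, r_nb

In the mating step, two parents are then drawn from s_nb (the current solution itself can be appended so that the set is never empty), and in the replacement step at most a pregiven number of members of r_nb with worse scalar values than the newly generated solution are replaced.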
Here, we make some remarks on the proposed DDN strategy:
(1) Given a subproblem, the DDN strategy divides its neighborhood into an s-neighborhood and an r-neighborhood. With respect to the subproblem’s weight vector, individuals in the s-neighborhood have better scalar function values than those in the r-neighborhood. Parent individuals selected from the s-neighborhood therefore have a good chance of producing better child individuals. Replacing individuals only in the r-neighborhood pushes the whole population towards the true Pareto front, thus increasing the convergence capability.
(2) The number of individuals in the neighborhood changes according to the search scope of each subproblem, controlled by the parameter d. The benefits of such a restriction are twofold: firstly, mating can be restricted to similar parent individuals; secondly, for subproblems with lower convergence speeds, potentially good individuals can be maintained in the population since each subproblem is only responsible for a specific narrow search direction. Note that, for an individual in the approximated nondominated set during the search, when d approaches 0, the generation of child individuals can be considered a local search process.
(3) The use of the parameter d bears some similarity to the strategy of constrained decomposition in MOEA/D (MOEA/D-CD) [39]. In Wang et al.’s study [39], a constraint is imposed on each subproblem to restrict its improvement region; then, for a subproblem, the partial order of two solutions is redefined and replacement is implemented according to the new dominance relation. However, in MOEA/D-CD, the neighborhood size of each subproblem remains fixed, whereas in the proposed DDN strategy, the neighborhood size is controlled by the parameter d. Moreover, selection and replacement are applied to different regions of the restricted neighborhood.
(4) In the proposed DDN strategy, the s-neighborhood and r-neighborhood are defined according to the scalar function values corresponding to the subproblem’s weight vector. The s-neighborhood of each solution includes at least the solution itself; in that case, the two parents are identical and only the mutation operator takes effect. However, for combinatorial problems, it is almost impossible that all subproblems have the same convergence speed. In other words, during the evolution, it is unlikely that all individuals in the population are evenly distributed and located on the same nondominated front. Thus, the s-neighborhood may contain more than one individual, and its size depends on the parameter d as well as the population distribution at different stages of evolution. In the experimental analysis, the trend of the s-neighborhood size during evolution will be reported.
(5) To overcome the limitation of the standard MOEA/D when solving problems with a complex Pareto front, some studies employ the strategy of adaptive weight vector adjustment [40, 41]. By adjusting weight vectors, the diversity performance of the algorithm can be improved. The proposed DDN strategy and the weight vector adjustment strategy share the same motivation, i.e., tackling complex multiobjective optimization problems, but DDN uses a different mechanism. The DDN strategy focuses on improving the convergence ability of the algorithm by taking into account the property of an irregular Pareto front: with the DDN strategy, convergence is improved by restricting the mating parents when individuals are located in the discontinuous regions of the Pareto front. The weight vector adjustment strategy, in contrast, obtains better uniformity of solutions by redistributing the weights of the subproblems [40].
(6) We are aware that there are several other kinds of scalar functions in MOEA/D, such as the Tchebycheff approach and the penalty-based boundary intersection (PBI) approach. For complex MOPs, the Tchebycheff and PBI approaches might achieve better performance than the weighted sum approach within the original framework of MOEA/D. However, this benefit comes at a price: for instance, one has to identify the reference points for the Tchebycheff approach and set the value of the penalty factor for the PBI approach, and determining the optimal parameters is not a trivial task for complex problems. The proposed MOEA/D-DDN provides an alternative for solving complex MOPs by MOEA/D with the simplest scalar function, i.e., the weighted sum approach.

4.3. MOEA/D with DDN

The framework of the proposed MOEA/D-DDN is shown in Algorithm 2. The original MOP is decomposed into H scalar objective subproblems.

(1)Algorithm initialization.
(2)while not terminate do
(3)for i = 1 to H do
(4)  Generate a trial solution for subproblem i using DDN strategy.
(5)  Update the population by the new generated solution using DDN strategy.
(6)  Update the external population.
(7)end for
(8)end while

The specification of MOEA/D-DDN is as follows.

4.3.1. New Solution Generation

There are many ways to generate a trial solution in MOEA/D. In MOEA/D-DDN, a trial solution is generated by applying the genetic operators to two randomly selected solutions from the s-neighborhood. The genetic operators for the addressed problem are elaborated in the following section.

4.3.2. Population Replacement

MOEA/D and its variants replace solutions in the population by comparing the scalar objective value of a newly generated solution with those of the solutions in the neighborhood. We use the same replacement mechanism as in Li and Zhang [42]; i.e., at most a pregiven number of solutions in the neighborhood can be replaced. In MOEA/D-DDN, replacement is restricted to the r-neighborhood of each subproblem.

4.3.3. Summary of MOEA/D-DDN

Based on the above discussion, the inputs of MOEA/D-DDN can be summarized as follows:
(i) H: the number of subproblems
(ii) D: the initial neighborhood size
(iii) the index set of the original neighboring subproblems of each subproblem i, of size D
(iv) d: the distance defining the search scope
(v) the maximal number of solutions replaced by each child solution
(vi) the set of reference points in the objective space, which can be determined by a problem-specific method
(vii) G: the number of generations used as the stopping criterion
At each generation, MOEA/D-DDN with the weighted sum approach maintains:
(i) a population of H individuals
(ii) the fitness (scalar function) value of each individual
(iii) an external population (EP), which is used to store the nondominated solutions found during the search

One can see from the above discussion that MOEA/D-DDN has only one additional parameter, i.e., the search scope distance d. The performance of the algorithm with different values of d is investigated in the experimental analysis. Since the proposed MOEA/D-DDN does not change the basic framework of MOEA/D, its computational complexity remains the same as that of the original MOEA/D.

5. Genetic Representation and Operators for WSPPs

5.1. Chromosome Representation

In portfolio problems, a hybrid binary/real-valued encoding is recommended as the genetic representation by Streichert et al. [43] and is also employed in other works [10, 16]. With this encoding, the real-valued vector indicates the share of the budget allocated to different assets or projects, while the binary vector indicates whether an asset is selected or not. For WSPPs, a chromosome representation should include the information of weapon type, amount, and planning time. However, the hybrid representation for portfolio problems does not consider the time horizon. For the problem addressed in this research, we design a chromosome representation consisting of two parts, a budget distribution and a priority matrix, shown in Figures 5 and 6, respectively.

Both the budget distribution and the priority matrix are real-valued. The budget distribution vector indicates the budget share of each time unit within each year. Elements of the priority matrix are encoded as real values between 0 and 1, and a lower value indicates a higher priority. In the initialization, the priority values are randomly generated within this interval, and all values are distinct. For example, given a single weapon type with 6 units, a chromosome can be generated in which the second unit has the lowest priority value; the second unit of the weapon will then be selected first if the budget is available.
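A minimal sketch of how such a chromosome might be initialized is given below; the normalization of the budget shares within each year and all variable names are our assumptions.

import random

def init_chromosome(num_years, units_per_year, units_per_type):
    # Return (budget_distribution, priority_matrix) filled with random real values.
    budget = []
    for _ in range(num_years):
        shares = [random.random() for _ in range(units_per_year)]
        total = sum(shares)
        budget.append([s / total for s in shares])   # shares normalized within the year (assumption)
    # Priority values between 0 and 1; a lower value means a higher selection priority.
    priority = [[random.random() for _ in range(n)] for n in units_per_type]
    return budget, priority

budget, priority = init_chromosome(num_years=2, units_per_year=12, units_per_type=[6])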

5.2. Decoding Procedure

A chromosome must be decoded into a selection and planning solution so that the objective values can be evaluated. At each time unit, the original available budget is determined by the corresponding share of the yearly budget. Note that the actual available budget can differ from this value due to the accumulation of previously unspent budget. The selected weapon types and amounts are then determined according to the priorities. The decoding procedure is presented as Algorithm 1.

Note that the decoding procedure ensures the satisfaction of the budget and development-capacity constraints of the multiobjective optimization problem in equation (7). We use a simple example to illustrate the decoding procedure. Consider a problem with 12 planning units (1 year) and a single weapon type with 6 units, with a given unit cost and yearly budget. At the first time point, due to the available-budget constraint, only one unit can be afforded; according to the priority matrix, the second unit of the weapon is selected. At the second time point, the available budget, including the unspent remainder of the first time unit, allows three more units to be selected, namely, weapon units 5, 3, and 4. The values of the decision variables then follow directly from these selections, as sketched below.
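The simplified sketch below (a single weapon type, hypothetical cost, budget, and priority values, and no development-capacity check) mirrors the greedy logic of Algorithm 1: at each time unit, unselected units are taken in ascending order of their priority values for as long as the available budget allows, and any remainder is carried over.

def decode_single_type(budget_per_time_unit, priority, unit_cost):
    # Greedy decoding for one weapon type; returns {unit index: starting time}.
    plan, carry = {}, 0.0
    for m, unit_budget in enumerate(budget_per_time_unit, start=1):
        available = unit_budget + carry
        # unselected units, lowest priority value (i.e., highest priority) first
        for j in sorted((j for j in range(len(priority)) if j not in plan),
                        key=lambda j: priority[j]):
            if available < unit_cost:
                break
            plan[j] = m                 # unit j starts development at time unit m
            available -= unit_cost
        carry = available               # unspent budget is accumulated
    return plan

# hypothetical example: 12 time units, 6 units of one weapon type, unit cost 4
budget = [5.0, 11.0] + [0.0] * 10
priority = [0.71, 0.05, 0.40, 0.52, 0.33, 0.88]
# unit indices are 0-based: unit 2 starts at time 1; units 5, 3, and 4 at time 2
print(decode_single_type(budget, priority, unit_cost=4.0))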

5.3. Crossover and Mutation

Crossover and mutation are two important operators in genetic algorithms. We use the BLX-α crossover as in Streichert et al. [27], since this crossover performed best for portfolio problems. For each gene, the two parent values define an interval; the BLX-α crossover randomly reinitializes the offspring value within this interval extended on both sides by α times its length. The value of α is set as in Streichert et al. [27]. A similar crossover operator is employed for the priority matrix.
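A minimal sketch of the BLX-α crossover on a real-valued vector is shown below; the default α and the clipping to [0, 1] are our assumptions to keep offspring values in the valid encoding range.

import random

def blx_alpha(parent1, parent2, alpha=0.5):
    # Each offspring gene is drawn uniformly from the parents' interval
    # extended by alpha times its length on both sides.
    child = []
    for a, b in zip(parent1, parent2):
        lo, hi = min(a, b), max(a, b)
        extent = alpha * (hi - lo)
        gene = random.uniform(lo - extent, hi + extent)
        child.append(min(max(gene, 0.0), 1.0))   # clip to the encoding range (assumption)
    return child

child = blx_alpha([0.2, 0.7, 0.4], [0.5, 0.1, 0.9])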

The mutation operator is executed as follows. For the budget distribution list, a mutation point is randomly determined for each year; then, a random real value between 0 and 1 is generated and the value at the mutation point is replaced. For the priority matrix, a weapon type is randomly selected, and the elements corresponding to the selected type are replaced by values randomly drawn from a given distribution; if a newly generated value is negative, it is reset to a valid value. Note that this mutation operator is similar to that in Streichert et al. [27], in which decision variables are mutated by adding a random Gaussian number with a specific deviation.

6. Experimental Analysis

6.1. Test Instance Generation

Since no existing literature addresses exactly this kind of WSPP, no benchmark instances are available to test the proposed algorithm. To concretely illustrate the studied problem and investigate the performance of MOEA/D-DDN, different instances were randomly generated. The test bed consists of three sets of instances with different sizes. Table 1 shows the parameters of instance generation. Each set includes 5 instances, and the corresponding values were drawn uniformly from the given intervals. The main difference among the test sets is the number of weapon types. Note that the values in Table 1 are artificial for reasons of confidentiality; however, the intervals were chosen with reference to real cases of weapon selection and planning. Specifically, all parameter ranges refer to data from the management department of the Chinese Army during 2005–2015. Although the test instances are artificial, the results are generalizable to real situations of weapon selection and planning in the future.

6.2. Parameter Settings

The algorithm was implemented in C#. The population size was set to 100, and the initial neighborhood size was set to 20. The crossover and mutation rates were fine-tuned and set to 0.95 and 0.9, respectively. The maximal number of solutions replaced by each child solution was 2. The algorithm was terminated after 100 generations. In the experiments, all algorithms were run 20 independent times for each instance. For performance comparison, we conducted statistical significance tests on the results at a given significance level.

6.3. Performance Metrics

Similar to most combinatorial problems, the true Pareto front is unknown. Thus, we use the hypervolume and the set coverage as performance metrics, as in Ke et al. [34]:
(1) Hypervolume indicator [44]: let a reference point be a point in the objective space that is dominated by all Pareto optimal objective vectors. The hypervolume of an obtained approximation to the Pareto front is the volume of the region that is dominated by the approximation and dominates the reference point. The higher the hypervolume value, the better the approximation. Since the two objectives of WSPPs have different scales, the objective values are normalized for the hypervolume calculation. In the normalization, the maximum and minimum values of the negated effectiveness objective are identified as 0 and −25, and those of the risk objective are 0.4 and 0. After normalization, the reference point was set to (1, 1) in the calculation of the hypervolume. Note that if the reference point takes the range of the problem into consideration, it is not necessary to normalize the objective values of the nondominated solutions [45].
(2) Set coverage (C metric) [44]: let A and B be two approximations to the Pareto front. C(A, B) is defined as the percentage of solutions in B that are dominated by at least one solution in A. For instance, C(A, B) = 1 indicates that all solutions in B are dominated by some solutions in A. Note that C(A, B) is not necessarily equal to 1 − C(B, A). A higher value of C(A, B) and a lower value of C(B, A) mean that A has a better convergence performance. In the calculation of the C metric, duplicate solutions were removed from the obtained nondominated sets.
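For two-objective minimization (as in the converted WSPP model), both metrics can be computed in a few lines; the sketch below assumes objective vectors that have already been normalized.

def dominates(a, b):
    # a dominates b (minimization of both objectives).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def set_coverage(A, B):
    # C(A, B): fraction of solutions in B dominated by at least one solution in A.
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

def hypervolume_2d(front, ref=(1.0, 1.0)):
    # Hypervolume of a two-objective front with respect to a reference point.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv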

6.4. Sensitivity Analysis

The proposed MOEA/D-DDN has one additional parameter, the search scope distance d. We first investigated the performance of different d values. Statistical analyses of the hypervolume of the obtained Pareto solutions suggest that there is no significant difference for most instances.

Note that the hypervolume was normalized with a relatively wide range, which may affect the analysis based on the hypervolume measure. We therefore further investigated the set coverage. We collected the final nondominated solutions over the 20 runs for each value of the parameter d, and a score based on the set coverage is calculated for each value.

Table 2 reports the score for all tested values of the parameter d on the 15 instances. A lower score indicates a lower percentage of the obtained solutions being dominated. In the table, if the results with a specific value of d achieve the lowest score, the score is highlighted in boldface. The results also suggest that the optimal value of the parameter d varies across instances. For our benchmark, one setting of d performed best on 7 of the 15 instances, while the other settings performed best on 4, 3, and 1 instances, respectively. Therefore, if a single value of d has to be chosen for all instances, the setting that performed best most often is preferred.

6.5. Effect of Algorithm Components

The proposed MOEA/D-DDN is characterized by two mechanisms: (1) dividing the neighborhood into a selection neighborhood and a replacement neighborhood and (2) using the parameter d to control the neighborhood size. In this subsection, we investigate the effects of these two mechanisms. The algorithm with only the divided neighborhoods (i.e., with the parameter d relaxed) is denoted MOEA/D-DivN, whereas the algorithm that only adds the parameter d to the original MOEA/D is denoted MOEA/D-DisN. The same value of d was chosen for MOEA/D-DisN and MOEA/D-DDN. Table 3 reports the hypervolume results of the three algorithm versions as well as the original MOEA/D. For each instance, the best value, the average, and the standard deviation are reported. If the value obtained by MOEA/D-DDN is significantly better than that of one of the other three algorithms, its average is presented in boldface. From Table 3, we can see that MOEA/D-DDN performed best on all instances, whereas the performances of MOEA/D-DivN, MOEA/D-DisN, and MOEA/D are comparable. This suggests that the combination of the two algorithm components of MOEA/D-DDN is effective for the addressed problem.

6.6. Algorithm Dynamics

Compared with the original MOEA/D, the main difference of MOEA/D-DDN lies in its changing, divided neighborhoods: the sizes of the s-neighborhood and r-neighborhood vary during the evolution process. In this subsection, we investigate the dynamics of the neighborhood sizes of individuals. The data of the first run on instance 10 were used for the analysis. We first present two different patterns of neighborhood size variation, observed for the 30th and the 96th individuals in the population. Their neighborhood sizes are shown in Figures 7 and 8, respectively.

Typically, a smaller s-neighborhood might indicate a better convergence performance for an individual, and vice versa. One can see from Figure 7 that, for the 30th individual, the s-neighborhood size tends to decrease during evolution. At the beginning of the evolution process, the s-neighborhood size was 13, while it remained 1 when the algorithm ended. This indicates that the 30th individual had a poor nondominated ranking in the initial population, but this performance gradually improved during the evolution; a change of s-neighborhood size from a higher value to a lower value represents an improvement of the nondominated ranking. Figure 8 shows a different pattern: the 96th individual had a better nondominated rank in the initial population, while its performance deteriorated along with the evolution. Figure 9 shows the population at the 1st, 50th, and 100th generations, denoted by dots, triangles, and stars, respectively. The positions of the 30th and 96th individuals are marked by a hexagon and a circle, respectively. The 96th individual was located on the first nondominated rank in the initial population but was dominated in the 50th and 100th generations, whereas the situation of the 30th individual is the opposite.

Here, a question arises: why did a good solution in the initial population not contribute to the convergence performance? Recall that, similar to the original MOEA/D, MOEA/D-DDN generates evenly distributed weight vectors for the individuals, and the population is then initialized randomly. The weight vectors remain unchanged during the whole evolution process. Possibly, the weight vector assigned to an individual is not the most appropriate one for that individual. For instance, in the initial population, the 96th individual lay almost in the middle of the population in the objective space, but its assigned weight vector directed it to search toward a side direction. In other words, this type of individual has a longer path to search in the objective space; given evenly allocated computational resources, its performance might be worse.

We then conjecture that different parts of the population contribute differently to the convergence performance. We selected 3 sets of individuals from the population: set 1 includes the first 10 individuals, set 2 includes individuals 45 to 54, and set 3 consists of the last 10 individuals. The s-neighborhood sizes of each individual set during the evolution are reported in Figures 10–12, respectively. A common feature shared by the 3 sets is that the change frequency of the s-neighborhood size decreased during the evolution, which indicates the convergence of the population. Looking at the initial populations, one can see that the spreads of the s-neighborhood sizes of the three sets are similar. For sets 1 and 3, the s-neighborhood sizes increased toward the end of the run for most individuals, whereas they decreased for most individuals in set 2. Given an evenly spread population, a lower s-neighborhood size usually indicates a better nondomination rank for an individual. It therefore seems safe to state that, given evenly assigned weight vectors and a randomly generated initial population, it is more difficult to push individuals located at the end parts of the approximated Pareto front toward the true Pareto front. Some works have addressed similar issues; Li et al. [46] proposed a stable matching model for the selection process of MOEA/D. This technique could be employed, and the performance of MOEA/D-DDN might be improved further for more complicated problems. However, this paper focuses on the effect of distance-based divided neighborhoods for combinatorial problems, so further improvement of MOEA/D-DDN is not considered here but left to future research.

6.7. Further Performance Comparison

MOEA/D and NSGA-II are two classic algorithms for solving multiobjective problems. The existing literature suggests that MOEA/D outperforms or performs similarly to NSGA-II on continuous and combinatorial multiobjective problems [29]. Recently, combinations of the decomposition mechanism and NSGA-II components have been used successfully to solve combinatorial problems [47]. The latest research also suggests the effectiveness of NSGA-II for solving similar scheduling problems [48–50]. Thus, in this paper, we choose NSGA-II as the baseline algorithm for comparison. In this subsection, we compare the performance of MOEA/D-DDN with that of NSGA-II as well as the original MOEA/D. All parameters were set the same as for MOEA/D-DDN. Figure 13 reports the approximations of the Pareto optimal solutions obtained by MOEA/D-DDN, MOEA/D, and NSGA-II for all instances. It is clear that MOEA/D-DDN achieved the best results in terms of convergence for all instances. However, it should be noted that, for some instances, NSGA-II performed better in terms of maximum spread, for example, on instances 1, 3, 14, and 15. The performances of MOEA/D and NSGA-II are comparable in terms of both convergence and maximum spread. We can therefore conclude that the proposed MOEA/D-DDN solves the addressed WSPPs better than the original MOEA/D and NSGA-II.

7. Conclusion

In the capability planning area, decision makers face the problem of deciding the types and amounts of weapons. Furthermore, they have to plan when to develop each unit of each weapon. This type of problem is termed the weapon selection and planning problem (WSPP). This paper refines the multiobjective model of WSPPs. The problem is combinatorial, and the multiple types of decision variables increase its complexity. MOEA/D is employed as the basic optimizer for solving the problem. However, the ability of the original MOEA/D with the weighted sum approach is limited due to the discontinuous, nonconvex, and irregular Pareto front. To overcome the possible drawbacks of MOEA/D, two mechanisms are designed: (1) dividing the original neighborhood of each individual into selection and replacement neighborhoods and (2) controlling the neighborhood sizes by constraining the search scope of each subproblem. The proposed algorithm is termed MOEA/D-DDN. The neighborhood is divided by means of the scalar function value, and an additional parameter d is introduced to control the search scope of each subproblem.

To illustrate the addressed problem and investigate the performance of MOEA/D-DDN, benchmark instances of different scales were generated. The sensitivity analysis of d suggests that the optimal value of d needs to be identified for each problem; this might be a common challenge for evolutionary algorithms and is worth addressing in future research. The effectiveness of the two designed mechanisms has been investigated, and the performance of MOEA/D-DDN for solving WSPPs was compared with that of the original MOEA/D and NSGA-II. The results suggest that MOEA/D-DDN performed better than the other algorithms.

Currently, various MOEAs have been developed and applied to problems in different areas. However, most benchmarks are continuous problems. The abilities and possible drawbacks of various MOEAs for complex combinatorial problems are still open issues, and it might be interesting to address them in future research.

Data Availability

The data used in the paper includes generated benchmarks and obtained results. All data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 71501181 and 71871185.