Abstract

A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a dual level structure. In the upper level, a set of agents makes decisions to select appropriate project portfolios. Each agent selects its project portfolio independently. The neighborhood competition operator and the self-learning operator are designed to improve an agent’s energy, that is, the portfolio profit. In the lower level, the selected projects are scheduled simultaneously, and completion times are computed to estimate the expected portfolio profit. A priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances was generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.

1. Introduction

The project portfolio selection problem (PPSP), together with its various extensions, has been widely studied during the last decade. Given a set of project proposals and constraints, the traditional PPSP is to select a subset of project proposals to optimize the organization’s performance objective [1]. Mathematical models have been proposed in the literature. The project portfolio profit is regarded as the natural performance objective and is utilized by most models, for example, the zero-one integer programming model [2]. Since the PPSP is an NP-hard problem [3], metaheuristic algorithms such as evolutionary approaches are widely used [4].

Most studies on the PPSP sever the inherent relationship between portfolio selection and project scheduling. The traditional PPSP is based on some assumptions. It is assumed that an individual project has a fixed and unchangeable schedule [5]; hence, only the project selection decision is considered to impact the final portfolio profit. However, project scheduling tends to affect portfolio feasibility by adjusting the start and completion times of activities [6]. Especially when resources are constrained, scheduling of project activities helps to better utilize the limited resources and consequently to increase the portfolio profit [5]. Including project activity scheduling as a subproblem of project portfolio selection helps improve overall organization performance even though it increases the complexity of decision making. This combined problem is termed the resource-constrained project portfolio selection and scheduling problem (RCPPSSP). The RCPPSSP can be described as a problem to select an optimal portfolio of projects and schedule their activities to maximize an organization’s stated objectives without exceeding available resources or violating other constraints [7].

The RCPPSSP has attracted increasing attention in recent years as a new research problem. Owing to the dual level structure of the RCPPSSP, most algorithms in the current literature are also composed of two parts. In the upper level, decisions are made to select project portfolios. In the lower level, procedures of multiproject scheduling are adopted to improve the performance of the selected portfolio. Due to the NP-hard nature of the project portfolio selection problem, researchers have developed heuristics and metaheuristics to improve solution quality and computational efficiency. For example, an implicit enumeration procedure was developed for all possible project priority sequences with high profit [7]. An ant colony optimization (ACO) algorithm based on the max-min ant system [8] was proposed, in which solutions were encoded as walks of agents in a construction graph, and transition probabilities were computed to determine the probability of an arc of the graph being chosen by the agents in the next iteration [8]. An iterative multiunit combinatorial auction algorithm [5] was also used to select project portfolios through a distributed bidding mechanism. In the lower level, heuristics such as the greedy heuristic [8] and priority rule-based heuristics [9] are widely adopted for multiproject scheduling.

In recent years, agent-based computation has been widely applied in distributed problem solving. An agent is a self-contained problem solving entity [10] which exhibits the properties of autonomy, social ability, responsiveness, and proactiveness [11]. In a multiagent optimization system (MAOS) [12], self-organizing agents [13] interact to optimize their own problem solving with limited declarative knowledge and simple procedural knowledge under ecological rationality [12]. Specifically, agents explore in parallel through three types of interactions, namely, cooperation, coordination, and negotiation [14]. Since interactions among the agents contribute to solution diversity and rapid convergence in some cases [15], it is recommended to embed the MAOS in evolutionary algorithms to improve the solution quality [12, 15–18]. Recently, multiagent evolutionary algorithms have been used for single project scheduling [19].

The objective of this paper is to develop a multiagent evolutionary algorithm for the RCPPSSP. The master procedure in the upper decision level is designed by combining the neighborhood competition operator and the self-learning operator in the multiagent system. A priority rule-based heuristic is adopted as the subprocedure in the lower decision level. In Section 2, we present the resource-constrained project portfolio selection and scheduling problem and its mathematical model. Section 3 explains the multiagent evolutionary algorithm that we have developed for the RCPPSSP. Computational experiments and results are discussed in Section 4. Finally, we conclude this paper in Section 5.

2. Project Portfolio Selection and Scheduling

The objective of the resource-constrained project portfolio selection and scheduling problem is to maximize the project portfolio profit. The problem can be recognized as a dual level decision problem [8].

The upper level is to select feasible project portfolios under resource constraints. There are a set of candidate projects from which we select the optimal portfolio. A pool of types of limited and renewable resources is available for all projects. It is assumed that there is no other relationship among the projects besides resource competition.

The lower level is to solve the multiproject scheduling problem, that is, to determine the start (completion) time of each activity without violating the precedence relations or resource constraints [5]. Given the resource constraints, scheduling project activities within a portfolio helps shorten the project duration and increase the portfolio profit since the project profit is a decreasing function of the project completion time [7].

Two sets of decision variables are designed in this paper: $x_i$ for project selection and $y_{ijt}$ for project activity scheduling, as shown in the following formulae:

$$x_i = \begin{cases} 1, & \text{if project } i \text{ is selected,} \\ 0, & \text{otherwise,} \end{cases} \qquad y_{ijt} = \begin{cases} 1, & \text{if activity } j \text{ of project } i \text{ is completed at time } t, \\ 0, & \text{otherwise.} \end{cases}$$

The notations used in this paper are listed in Notations for the RCPPSSP section.

The RCPPSSP can be formulated as a 0-1 integer programming model [5, 7]:

$$\max \; Z = \sum_{i=1}^{I} \sum_{t=1}^{T} p_i(t)\, y_{i n_i t} \tag{2}$$

subject to

$$\sum_{t=1}^{T} y_{ijt} = x_i, \quad \forall i, j, \tag{3}$$

$$\sum_{t=1}^{T} t\, y_{i n_i t} \le D_i x_i, \quad \forall i, \tag{4}$$

$$\sum_{t=1}^{T} t\, y_{ijt} \le \sum_{t=1}^{T} (t - d_{il})\, y_{ilt}, \quad \forall i, \; \forall j, \; \forall l \in S_{ij}, \tag{5}$$

$$\sum_{i=1}^{I} \sum_{j=1}^{n_i} r_{ijk} \sum_{q=t}^{t+d_{ij}-1} y_{ijq} \le R_k, \quad \forall k, t, \tag{6}$$

$$x_i \in \{0, 1\}, \quad \forall i, \tag{7}$$

$$y_{ijt} \in \{0, 1\}, \quad \forall i, j, t, \tag{8}$$

where $n_i$ denotes the terminal activity of project $i$.

The objective (2) is to maximize the total profit of the selected project portfolio. Constraint (3) ensures that all activities of a selected project are completed; it also enforces that no activity of an unselected project is executed. Constraint (4) guarantees that each selected project is complete before its deadline. Constraint (5) describes the precedence relations among activities, requiring an activity to start only after all its predecessors have been completed. Constraint (6) ensures that in each time period the demand on any resource does not exceed its capacity. Formulae (7) and (8) declare the decision variables.
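To make the model concrete, the following Python sketch evaluates objective (2) and checks constraints (4) and (5) for a candidate solution. All data structures and names here (`finish`, `profit`, `last`, and so on) are illustrative assumptions, not notation from the paper.

```python
# Hypothetical data structures (illustrative, not from the paper):
#   selected      - list of selected project indices (x_i = 1)
#   finish[i][j]  - completion time of activity j of project i
#   profit[i][t]  - profit of project i if it completes at time t
#   last[i]       - index of the terminal activity of project i

def portfolio_profit(selected, finish, profit, last):
    """Objective (2): total profit of the selected portfolio."""
    return sum(profit[i][finish[i][last[i]]] for i in selected)

def respects_deadline(selected, finish, last, deadline):
    """Constraint (4): each selected project completes by its deadline."""
    return all(finish[i][last[i]] <= deadline[i] for i in selected)

def respects_precedence(selected, finish, duration, successors):
    """Constraint (5): a successor starts only after its predecessor
    finishes (start time = completion time minus duration)."""
    return all(finish[i][l] - duration[i][l] >= finish[i][j]
               for i in selected
               for j, succs in successors[i].items()
               for l in succs)

# Tiny one-project example: activity 0 (duration 2) precedes activity 1
# (duration 1); the project finishes at time 3 and earns 100.
last = {0: 1}
duration = {0: {0: 2, 1: 1}}
successors = {0: {0: [1], 1: []}}
finish = {0: {0: 2, 1: 3}}
profit = {0: {3: 100}}
deadline = {0: 4}
```

Constraint (6) would additionally require summing resource demands of activities in process at each time period, which the scheduling heuristic in Section 3.2 handles.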

3. Multiagent Evolutionary Algorithm

To solve the RCPPSSP, a multiagent evolutionary algorithm (MAEA) is proposed. Corresponding to the dual level structure of the RCPPSSP, the MAEA has two levels as well. The master procedure for the upper level is to select project portfolios, and a priority rule-based heuristic [20] is designed as the subprocedure for the lower level to perform multiproject scheduling.

3.1. Project Portfolio Selection

The multiagent evolutionary algorithm is a combination of two theories: multiagent systems and evolutionary algorithms [17]. Generally, a multiagent system [15] is composed of an environment, a set of objects, a set of agents, a set of relations between objects (agents), and a set of operations. During observation of the environment and interaction with other agents, the fitness value of an agent can be estimated and optimized on the basis of the possessed resources, abilities, and knowledge [17].

In this paper a multiagent evolutionary algorithm is proposed to solve the project selection problem. We designed a multiagent system in which each agent selects project portfolios according to its own preferences and environment. The evolution of the agents is realized by the neighborhood competition and self-learning operators. In neighborhood competition, loser agents are replaced by newly generated agents. In this way, the information and knowledge of individual agents spread to the whole system. The winner agents conduct self-learning by applying their own knowledge, for which a simple genetic algorithm is developed. Since most multiagent systems adopt a real-valued representation, which is not appropriate for the project selection problem, we designed an agent system based on a discrete representation and modified the operators correspondingly.

3.1.1. Multiagent System

The objective function (2) of the RCPPSSP can be simplified as the following formula:

$$\max f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \ldots, x_I) \in S, \tag{9}$$

where $f(\mathbf{x})$ denotes the portfolio profit, which is equal to the objective value in (2), and $S = \{0, 1\}^I$ is an $I$-dimensional search space of the project selection problem. The $\mathbf{x}$ in boldface represents a vector which is a candidate solution in the search space. The component $x_i$ is a 0-1 variable which takes the value of 1 when project $i$ is selected or the value of 0 otherwise.

An agent for the RCPPSSP can be defined as follows.

Definition 1. An agent, denoted as $a$, represents a candidate solution $\mathbf{x}$ to the RCPPSSP. The value of its energy is equal to its value of the objective function in (2):

$$\text{Energy}(a) = f(\mathbf{x}). \tag{10}$$

The agent living in an environment makes decisions autonomously to increase its energy as much as possible. To realize the local perceptivity of agents, the environment is organized as a lattice-like structure which can be defined as follows [16].

Definition 2. All agents live in a lattice-like environment denoted as $L$. The size of $L$ is $L_{size} \times L_{size}$, where $L_{size}$ is a positive integer determined from the number of candidate projects.

Figure 1 illustrates an agent lattice. In the agent lattice, an agent denoted as $a_{u,v}$ is fixed on a lattice point $(u, v)$, where $u, v \in \{1, 2, \ldots, L_{size}\}$.

3.1.2. Neighborhood Competition Operator

In the agent lattice, agents compete with their neighbors to gain more resources so that their purposes, that is, objectives of the RCPPSSP, can be achieved. Reference [16] noted that the neighborhood competition operator facilitates information diffusion to the whole lattice. To describe the neighborhood competition operator, we define the neighborhood as follows.

Definition 3. All agents with a line or diagonal line connecting to agent $a_{u,v}$ constitute the neighborhood of agent $a_{u,v}$. The competing neighborhood of $a_{u,v}$ is denoted as $N_c(a_{u,v})$. The perceptive range $R_c$ of an agent’s competing neighborhood determines the number of competing neighbors as $(2R_c + 1)^2 - 1$, where $R_c \ge 1$.

For example, when $R_c$ is equal to 1, the number of agents in the neighborhood is $(2 \cdot 1 + 1)^2 - 1 = 8$.
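The competing neighborhood can be sketched in Python as follows. The toroidal wrap-around at the lattice borders is our assumption; the paper does not say how boundary agents are treated.

```python
import itertools

def competing_neighbors(u, v, lattice_size, R):
    """All lattice points within Chebyshev distance R of (u, v),
    wrapping around the borders (assumed) so that every agent has
    exactly (2R + 1)**2 - 1 neighbors."""
    return [((u + du) % lattice_size, (v + dv) % lattice_size)
            for du, dv in itertools.product(range(-R, R + 1), repeat=2)
            if (du, dv) != (0, 0)]
```

For a perceptive range of 1 this yields the 8 surrounding agents, and for a range of 2 it yields 24, matching the count $(2R_c + 1)^2 - 1$.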

The basic rule for neighborhood competition operator is defined as follows [16].

Rule 1. If the agent $a_{u,v}$ satisfies (11), it is a loser; otherwise, it is a winner:

$$\text{Energy}(a_{u,v}) < \max\{\text{Energy}(a) \mid a \in N_c(a_{u,v})\}. \tag{11}$$

The winner survives in the agent lattice, but the loser perishes and is replaced by a new agent $a_{new}$ generated from the local-best agent $a_{best}$, which is defined as

$$a_{best} = \arg\max\{\text{Energy}(a) \mid a \in N_c(a_{u,v})\}. \tag{12}$$

Two alternative strategies [16] are adopted to generate the new agent $a_{new}$.

Strategy 1. A set $D$ is composed of the sequence numbers of the positions where the loser agent $a_{u,v}$ takes different values from agent $a_{best}$; that is, $D = \{i \mid x_i^{u,v} \ne x_i^{best}\}$. The new agent $a_{new}$ is determined by formula (13), in which $i \in \{1, 2, \ldots, I\}$ and Random takes the values of 0 or 1 randomly:

$$x_i^{new} = \begin{cases} \text{Random}, & i \in D, \\ x_i^{best}, & i \notin D. \end{cases} \tag{13}$$

One example of Strategy 1 is presented in Figure 2.

Strategy 2. Mutation in the evolutionary algorithm is adopted to transform agent $a_{best}$ to $a_{new}$, as represented in formula (14): each component of $a_{best}$ is negated when its associated Random value falls below the mutation threshold, where $i \in \{1, 2, \ldots, I\}$ and Random is generated randomly in the interval (0, 1).

One example of Strategy 2 is presented in Figure 3.
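The two generation strategies can be sketched as follows, assuming a binary list representation for agents; the flip probability `p_flip` in Strategy 2 is an assumed parameter standing in for the exact threshold of formula (14).

```python
import random

def strategy1(loser, best, rng=random):
    """Strategy 1: keep the positions where the loser and the local-best
    agent agree; randomize the differing positions (the set D)."""
    return [b if a == b else rng.randint(0, 1)
            for a, b in zip(loser, best)]

def strategy2(best, p_flip, rng=random):
    """Strategy 2: bitwise mutation of the local-best agent; each bit is
    negated with probability p_flip (an assumed parameter)."""
    return [1 - b if rng.random() < p_flip else b for b in best]

loser = [1, 0, 1, 0, 0]
best  = [1, 1, 1, 1, 0]
new1 = strategy1(loser, best)   # positions 0, 2, 4 agree and are kept
new2 = strategy2(best, 0.2)
```

Note how Strategy 1 preserves all positions where the two agents agree, so on average the new agent resembles the local-best agent.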

In this paper a uniformly distributed random parameter, together with the coefficient $\lambda$, is used to determine which strategy is to be applied to generate the new agent. Firstly, we calculate the similarity $s$ between the loser agent $a_{u,v}$ and the local-best agent $a_{best}$ with maximum energy in the neighborhood by formula (15), in which the similarity is measured by the proportion of positions where the two agents take the same value, so that $s \in [0, 1]$. The higher the value of $s$ is, the more similar $a_{u,v}$ is to $a_{best}$, and the lower the chance of obtaining a better solution through Strategy 1.

Therefore, the rule to select an appropriate strategy is designed as follows.

Rule 2. When the value of $s$ satisfies (16), that is, when the similarity is small relative to the coefficient $\lambda$, Strategy 1 is adopted to generate new agents. Otherwise, Strategy 2 is adopted.

3.1.3. Self-Learning Operator

In order to survive in competition, agents in the lattice may take actions to increase their energy by using their own knowledge [21]. The self-learning operator [16, 21] is designed to help agents achieve this purpose. It is assumed that only winner agents have the chance to conduct self-learning.

Definition 4. All agents with a line or diagonal line connecting to agent $a_{u,v}$ constitute the neighborhood of agent $a_{u,v}$. The self-learning neighborhood of $a_{u,v}$ is denoted as $N_s(a_{u,v})$. The perceptive range $R_s$ of an agent’s self-learning neighborhood determines the number of self-learning neighbors as $(2R_s + 1)^2 - 1$, where $R_s \ge 1$.

The basic rule for the self-learning operator is defined as follows [16, 21].

Rule 3. If agent $a_{u,v}$ satisfies (17), it has the chance to execute the self-learning operator:

$$\text{Energy}(a_{u,v}) \ge \max\{\text{Energy}(a) \mid a \in N_s(a_{u,v})\}. \tag{17}$$

A simple genetic algorithm (SGA) [22] is adopted to realize the self-learning of agent $a_{u,v}$, in which the chromosome takes the same representation as an agent. The $p$th chromosome in the $g$th generation is denoted as $c_p^g$, where $p \in \{1, 2, \ldots, \text{PopSize}\}$ and $g \in \{1, 2, \ldots, \text{MaxGA}\}$. MaxGA denotes the maximum number of iterations and PopSize represents the population size. The fitness function of a chromosome is equal to the energy of its corresponding agent.

In the self-learning process, the initial population is generated as follows. The first chromosome in the initial population is set equal to agent $a_{u,v}$, and all other chromosomes are generated by randomly perturbing its components according to formulae (19) and (20).

Three types of operators are applied to generate the new population in each generation, including selection, crossover, and mutation. Binary tournament [23] is used for selection. The elitist strategy [24] is employed to guarantee convergence of the genetic algorithm. One-point crossover is adopted. A crossover point is randomly selected and then the genes in the two parent chromosomes after the crossover point are exchanged to generate two offspring chromosomes [25]. The mutation locus is also selected randomly. The selected gene is then mutated by negation. The probabilities of crossover and mutation are denoted as $p_c$ and $p_m$, respectively.
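The operators described above can be sketched as a small SGA loop. The chromosome encoding matches the binary agent representation; the loop structure and parameter handling are our assumptions about a standard elitist SGA, not the paper's exact implementation.

```python
import random

def binary_tournament(pop, fitness, rng=random):
    """Pick two random chromosomes and return the fitter one."""
    a, b = rng.sample(range(len(pop)), 2)
    return pop[a] if fitness(pop[a]) >= fitness(pop[b]) else pop[b]

def one_point_crossover(p1, p2, rng=random):
    """Exchange the genes after a randomly chosen crossover point."""
    c = rng.randrange(1, len(p1))
    return p1[:c] + p2[c:], p2[:c] + p1[c:]

def mutate(chrom, rng=random):
    """Negate the gene at a randomly chosen locus."""
    locus = rng.randrange(len(chrom))
    out = chrom[:]
    out[locus] = 1 - out[locus]
    return out

def evolve(pop, fitness, p_c, p_m, generations, rng=random):
    """Elitist SGA loop: the best chromosome always survives, so the
    best fitness found never decreases."""
    for _ in range(generations):
        elite = max(pop, key=fitness)
        new_pop = [elite]                       # elitist strategy
        while len(new_pop) < len(pop):
            p1 = binary_tournament(pop, fitness, rng)
            p2 = binary_tournament(pop, fitness, rng)
            if rng.random() < p_c:
                p1, p2 = one_point_crossover(p1, p2, rng)
            if rng.random() < p_m:
                p1 = mutate(p1, rng)
            new_pop.extend([p1, p2][: len(pop) - len(new_pop)])
        pop = new_pop
    return max(pop, key=fitness)

# Toy run with a stand-in fitness (number of ones in the chromosome).
random.seed(1)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
initial_best = max(map(sum, pop))
best = evolve(pop, sum, p_c=0.8, p_m=0.1, generations=30)
```

In the algorithm itself the fitness would be the agent energy, that is, the expected portfolio profit returned by the lower-level scheduling heuristic.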

3.1.4. Repair Mechanism

In the presence of resource constraints and multiproject scheduling, some agents (solutions) may be infeasible. In particular, the neighborhood competition operator and the self-learning operator may produce infeasible agents. Therefore, repair mechanisms are necessary and have been proposed in the relevant literature [8].

In this paper, infeasible agents are repaired by removing projects one at a time under a certain rule. In an RCPPSSP, the profit of a candidate project depends on its completion time, which is unknown in advance, so rules other than the random rule are impractical [8]. Given an infeasible portfolio, a randomly chosen project is removed from the portfolio, and this process continues until feasibility is achieved. The new agent representing the repaired feasible portfolio then replaces the incumbent infeasible agent.
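The random-rule repair loop is straightforward; in this sketch the feasibility predicate is a stand-in for the lower-level scheduling check.

```python
import random

def repair(portfolio, is_feasible, rng=random):
    """Random-rule repair: remove a randomly chosen project and repeat
    until the portfolio becomes feasible. In the full algorithm,
    is_feasible would invoke the lower-level scheduling heuristic."""
    portfolio = list(portfolio)
    while portfolio and not is_feasible(portfolio):
        portfolio.remove(rng.choice(portfolio))
    return portfolio

# Toy feasibility rule standing in for the resource check: at most two
# projects fit the available resources.
repaired = repair([1, 2, 3, 4], lambda p: len(p) <= 2)
```

The empty portfolio is trivially feasible, so the loop always terminates.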

3.2. Multiproject Scheduling

A priority rule-based heuristic is applied to schedule the selected projects in the lower decision level. Using the minimal slack (MINSLK) priority rule [26] and the serial schedule generation scheme (SSGS) [27], a multiproject schedule is generated for the selected portfolio. In a multiproject environment, the slack (SLK) of an activity is computed as follows:

$$SLK_{ij} = LF_{ij} - EF_{ij}, \tag{21}$$

where $LF_{ij}$ and $EF_{ij}$ are the latest and earliest finish times of activity $A_{ij}$, respectively, which are estimated by the critical path method (CPM).
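The earliest and latest finish times come from standard CPM forward and backward passes, which can be sketched as follows. Activity names are illustrative, and activities are assumed to be listed in a precedence-feasible order.

```python
def cpm_slack(duration, successors):
    """SLK = LF - EF via CPM passes. The dict keys must be ordered so
    that every predecessor appears before its successors."""
    order = list(duration)
    preds = {j: [] for j in duration}
    for j, succs in successors.items():
        for l in succs:
            preds[l].append(j)
    EF = {}
    for j in order:                          # forward pass: earliest finish
        EF[j] = max((EF[p] for p in preds[j]), default=0) + duration[j]
    horizon = max(EF.values())
    LF = {}
    for j in reversed(order):                # backward pass: latest finish
        LF[j] = min((LF[l] - duration[l] for l in successors.get(j, [])),
                    default=horizon)
    return {j: LF[j] - EF[j] for j in order}

# A (duration 2) and C (duration 1) both precede B (duration 3):
# A and B are critical, C has one unit of slack.
slack = cpm_slack({'A': 2, 'C': 1, 'B': 3}, {'A': ['B'], 'C': ['B']})
```

Activities with zero slack lie on the critical path and are scheduled first under the MINSLK rule.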

Suppose a subset $\Phi$ of projects is selected in the master procedure. In project $i \in \Phi$, there are $n_i$ activities, denoted as $A_{i1}, A_{i2}, \ldots, A_{i n_i}$. The SSGS heuristic consists of $N = \sum_{i \in \Phi} n_i$ stages. In each stage $g$, two disjoint activity sets are identified [27]. The scheduled set $S_g$ includes the activities which are already scheduled, and the decision set $D_g$ contains the unscheduled activities whose predecessors are all in the scheduled set $S_g$. According to the MINSLK priority rule, the activity with the minimum slack is selected from the decision set and scheduled as early as possible without violating the resource constraints. The activity is then moved from the decision set to the scheduled set. Algorithm 1 shows the pseudocode of the priority rule-based heuristic.

Procedure of the priority rule-based heuristic
BEGIN
INIT: $S_1 := \emptyset$; $D_1 :=$ activities without predecessors
FOR $g := 1$ TO $N$
   COMPUTE the slack of each activity in $D_g$
   SELECT activity $A^*$ from $D_g$ /* according to the MINSLK rule */
   ASSIGN a start time to $A^*$ /* as early as possible */
   $S_{g+1} := S_g \cup \{A^*\}$; UPDATE $D_{g+1}$
ENDFOR
END
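The pseudocode above can be fleshed out as a small Python sketch. The time-indexed resource bookkeeping and the input format are our simplifications, and the horizon is assumed large enough for every activity to fit.

```python
def ssgs_minslk(activities, slack, capacity, horizon):
    """Serial schedule generation with the MINSLK priority rule.
    activities: name -> (duration, predecessor list, per-resource demand)."""
    usage = [[0] * len(capacity) for _ in range(horizon)]
    finish, scheduled = {}, set()
    while len(scheduled) < len(activities):
        # decision set: unscheduled activities with all predecessors done
        eligible = [a for a in activities if a not in scheduled
                    and all(p in scheduled for p in activities[a][1])]
        a = min(eligible, key=lambda n: slack[n])        # MINSLK rule
        dur, preds, demand = activities[a]
        est = max((finish[p] for p in preds), default=0)  # earliest start
        for start in range(est, horizon - dur + 1):
            # earliest start where every resource fits in every period
            if all(usage[t][k] + demand[k] <= capacity[k]
                   for t in range(start, start + dur)
                   for k in range(len(capacity))):
                for t in range(start, start + dur):
                    for k in range(len(capacity)):
                        usage[t][k] += demand[k]
                finish[a] = start + dur
                break
        scheduled.add(a)
    return finish

# Two activities competing for one unit of a single resource: the
# lower-slack activity A is scheduled first, B is pushed to start at 2.
finish = ssgs_minslk({'A': (2, [], [1]), 'B': (3, [], [1])},
                     {'A': 0, 'B': 1}, capacity=[1], horizon=10)
```

The resulting completion times are then compared with the project upper bounds to decide whether the portfolio is feasible.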

To determine the feasibility of the multiproject schedule, we set an upper bound $D_i$ on each project’s completion time. Specifically, for project $i$ in the selected portfolio $\Phi$, if its completion time goes beyond its upper bound $D_i$, the portfolio $\Phi$ is recognized as infeasible and shall be repaired.

4. Computational Experiments and Results

This section presents the experiment design and computational analyses for investigating the performance of the proposed multiagent evolutionary algorithm for the RCPPSSP. Based on the design of experiment (DOE) approach, a set of instances was generated systematically. Then parameter configurations of the algorithm were set up through testing on examples. The proposed MAEA was then compared with other algorithms in the literature.

4.1. Experiment Design

An RCPPSSP instance consists of a pool of candidate projects, a set of profit profiles for all projects, and the resources available to all projects. Three project pools with 10 projects each and three other pools with 20 projects each were generated randomly from 72 instances with three types of resources and different networks selected from the widely used Patterson set [28]. The six project pools are denoted as PAT10_1, PAT10_2, PAT10_3, PAT20_1, PAT20_2, and PAT20_3, respectively.

It is assumed that a project achieves its base profit if it is complete at its critical path length (CPL) and that the profit decreases at a profit decreasing rate $\alpha$ as its completion time increases. Referring to [7], the base profit and actual profit of project $i$ are calculated by formulae (22) and (23), where a resource utilization coefficient, drawn from a uniform distribution, scales the base profit with the project’s resource requirements. The upper bound of project completion time is based on the critical path length by formula (24), $D_i = (1 + \rho) \cdot CPL_i$, where the relaxation rate $\rho$ has the value of 0.4 in this paper. Resource tightness is introduced to estimate the capacity of each resource from its maximum resource demand in the critical path method schedule.

Two levels of the profit decreasing rate $\alpha$ (2% or 8%) and of resource tightness (30% or 60%) were designed, forming the four experiment cells shown in Table 1. This $2 \times 2$ experimental design, crossed with our six project pools, yielded 24 instances to test the proposed algorithm.

4.2. Parameter Configurations

Parameters of the proposed MAEA were determined through experiments, including the maximum number of agent generations MaxMA, the lattice size $L_{size}$, the coefficient $\lambda$, and the perceptive ranges $R_c$ and $R_s$. The parameters of the SGA for self-learning were also assigned, including the maximum number of generations MaxGA, the population size PopSize, the crossover probability $p_c$, and the mutation probability $p_m$.

The number of generations and the lattice size were determined first. With a larger agent lattice and more generations of evolution, the multiagent algorithm is more likely to find the optimal solution, at the cost of a longer computation time. Through testing on examples, MaxMA was set at 100 in this paper. The suitable lattice size is proportional to the number of candidate projects, so we estimated $L_{size}$ from the number of candidate projects by a coefficient. Since the designed 24 instances have 10 or 20 projects, the lattice size $L_{size}$ takes the value of 5 or 9, respectively.

According to Definitions 3 and 4, when $L_{size} = 5$, both perceptive ranges $R_c$ and $R_s$ belong to the set $\{1, 2\}$. Similarly, when $L_{size} = 9$, both $R_c$ and $R_s$ belong to the set $\{1, 2, 3, 4\}$. Our testing showed that the proposed MAEA performs well with small perceptive ranges.

The coefficient $\lambda$ is used to select the strategy for generating new agents to replace loser agents. When $\lambda$ is larger, Strategy 1 is more likely to be applied, which means the new agent will be more similar to the best agent in its neighborhood. Consequently, the multiagent algorithm may easily be trapped in a local optimum. However, the stability of the algorithm is affected when $\lambda$ is too small. To trade off between convergence and stability, we set $\lambda$ to 0.25 according to our experiments.

In summary, the parameters of the proposed multiagent evolutionary algorithm were determined as described above: MaxMA = 100, $L_{size} \in \{5, 9\}$, and $\lambda = 0.25$, together with the perceptive ranges $R_c$ and $R_s$ and the SGA parameters MaxGA, PopSize, $p_c$, and $p_m$ chosen through testing.

4.3. Computational Results and Comparison

For the above 24 instances, the profits achieved by the proposed MAEA are shown in Table 2, as well as the results of two other benchmarking algorithms in [29], namely, Ranking and MKP_ranking methods.

In the Ranking method, all candidate projects are ranked by a certain priority rule. Projects are then scheduled one by one according to their priorities until the next project makes the portfolio infeasible. It was reported that the greedy “max profit” rule performs best [29] and hence was used in this paper.
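The Ranking benchmark reduces to a simple greedy loop, sketched below with an illustrative feasibility predicate (a profit budget standing in for the full scheduling check).

```python
def ranking_select(projects, profit, is_feasible):
    """Sketch of the Ranking benchmark: rank candidates by the greedy
    'max profit' rule and add them in order, stopping as soon as the
    next project would make the portfolio infeasible."""
    portfolio = []
    for p in sorted(projects, key=profit, reverse=True):
        if not is_feasible(portfolio + [p]):
            break                      # stop at the first infeasible addition
        portfolio.append(p)
    return portfolio

# Toy example: a budget of 90 profit units as the feasibility proxy.
profits = {1: 50, 2: 40, 3: 30}
chosen = ranking_select([1, 2, 3], profits.get,
                        lambda p: sum(profits[q] for q in p) <= 90)
```

Note that the method stops at the first infeasible candidate rather than skipping it, which is what makes it a weak but fast baseline.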

The MKP_ranking method also involves two stages [29]. In the first stage, the project selection problem is solved as a multidimensional 0-1 knapsack problem (MKP) and then all selected projects are prioritized by the “max profit” rule. In the second stage, all projects selected in the first stage are scheduled sequentially. In case a selected project cannot be scheduled before its deadline due to resource constraints or it has a negative profit, the project is removed from the portfolio.

The MAEA and benchmarking algorithms were implemented in C language on a PC with a CPU at 2.0 GHz and 2 GB physical memory. The average computation times are shown in Table 3. It is obvious that the profit decreasing rate has a significant role in determining the computation time of the multiagent evolutionary algorithm. If the profit decreases faster after the project’s critical path length as in Cell 3 and Cell 4, the algorithm takes a much longer time to search for portfolios with projects complete in time.

The average profits of 24 instances achieved by the three algorithms are 7486.85, 7489.24, and 8120.70, respectively. It is observed that the MAEA has a higher average profit than the other two methods.

To investigate the performance of the proposed MAEA, the Wilcoxon Signed Ranks Test was applied to analyze the data in Table 2. Table 4 shows the paired comparison outcomes of the statistical analysis. The profits obtained by the MAEA are significantly higher than those of the other two methods at a significance level of 0.001.
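The paper does not detail the test mechanics; as a generic illustration, the following sketch computes the signed-rank statistic $W$ that such a paired comparison rests on, before the significance lookup (in practice one would use standard statistical software).

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-ranks statistic for paired samples: rank the
    absolute differences (zero differences dropped, ties averaged),
    sum the ranks of positive and negative differences, and return
    the smaller sum W."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while (j + 1 < len(ranked)
               and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]])):
            j += 1
        avg = (i + j) / 2 + 1          # average rank for tied |differences|
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Toy paired sample: differences 1, 1, -2 give tied ranks 1.5, 1.5, 3.
W = wilcoxon_w([10, 12, 14], [9, 11, 16])
```

A small $W$ relative to the critical value for the sample size indicates a significant difference between the paired algorithms.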

5. Conclusions

In this paper, the resource-constrained project portfolio selection and scheduling problem is formulated as a 0-1 integer programming model. The problem has a dual level structure. Project scheduling in the lower level helps increase the portfolio profit by improving the resource allocation among selected projects and rescheduling their activities. A multiagent evolutionary algorithm is proposed to solve the RCPPSSP. The algorithm adopts a dual level structure owing to the nature of the RCPPSSP. In the upper level, agents in an agent lattice are designed to search for feasible portfolios autonomously. The neighborhood competition operator and self-learning operator are integrated to accelerate the evolution of agents. In the lower level, each agent adopts a priority rule-based heuristic to conduct multiproject scheduling and better utilize the scarce resources. We conducted experiments to test the performance of the proposed algorithm. A set of 24 instances was generated from the Patterson set systematically. Computational results show that the proposed multiagent evolutionary algorithm has an outstanding performance.

Notations

$P$: Set of candidate projects
$i$: Project index, $i = 1, 2, \ldots, I$, where $I$ denotes the number of candidate projects
$j$: Activity index, $j = 1, 2, \ldots, n_i$, where $n_i$ is the number of activities in project $i$
$t$: Time index, $t = 1, 2, \ldots, T$, where $T$ is the upper bound of project completion time
$k$: Resource index, $k = 1, 2, \ldots, K$, where $K$ is the number of resource types
$\Phi$: Set of selected projects, $\Phi \subseteq P$
$A_{ij}$: Activity $j$ in project $i$
$d_{ij}$: Duration of activity $A_{ij}$
$S_{ij}$: Set of immediate successors of activity $A_{ij}$
$s_{ij}$: Start time of activity $A_{ij}$
$f_{ij}$: Completion time of activity $A_{ij}$
$EF_{ij}$: Earliest finish time of activity $A_{ij}$
$LF_{ij}$: Latest finish time of activity $A_{ij}$
$A(t)$: Set of activities in process at time $t$
$R_k$: Capacity of renewable resource $k$
$r_{ijk}$: Quantity of resource $k$ required by activity $A_{ij}$ for its execution
$p_i(t)$: Profit of project $i$ when it is complete at time $t$
$D_i$: Upper bound of the completion time of project $i$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive comments. This work was supported by the National Natural Science Foundation of China (Grant no. 71072119) and the Zhejiang Provincial Natural Science Foundation of China (Grant no. R7100297).