Abstract

Complex engineering system optimization usually involves multiple projects or tasks. On the one hand, dependency modeling among projects or tasks highlights structures in systems and their environments, which helps in understanding the implications of connectivity for different aspects of system performance and also assists in designing, optimizing, and maintaining complex systems. On the other hand, multiple projects or tasks either happen at the same time or are scheduled into a sequence in order to use common resources. In this paper, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. The approach treats this decision process as a two-stage decision-making problem. In the first stage, a task clustering approach based on modularization is proposed so as to find a suitable decomposition scheme for a large-scale project. In the second stage, according to the decomposition result, a discrete artificial bee colony (ABC) algorithm inspired by the intelligent foraging behavior of honeybees is developed for the resource-constrained multiproject scheduling problem. Finally, a case from the engineering design of a chemical processing system is used to illustrate the proposed approach.

1. Introduction

Nowadays, complex engineering projects or design processes with long development times usually involve multiple disciplines and a great deal of effort. In general, the difficulties in design or development do not simply arise from engineering complexity but also lie in the organizational sophistication necessary to manage the design or development process. Therefore, it is very important to optimize the design or development process. Usually, a complex system includes a large number of tasks or subprojects, and the complex dependencies existing among tasks cause resource competition and coordination overhead. This means that in order to understand the implications of connectivity for different aspects of system performance, it is necessary to model the task dependencies and then sequence all project tasks to reveal the underlying structure of the design or development process.

In most of the literature, graphs or directed graphs such as flow charts and signal flow diagrams are used to analyze complex processes. For example, digraphs are easy to assimilate until they become quite large and lose their intuitive appeal [1]. Related to digraphs, adjacency matrices expedite the decomposition of large systems, since computers can easily handle matrices [2]. In this paper, we focus on two main issues, namely, task clustering and task sequencing, where more research has concentrated on the former. For instance, Tseng and Jiao [3] proposed an approach based on clustering analysis of a design matrix grounded in axiomatic design theory, aimed at implementing modular electrical design of electronic products at the system design level. Gershenson et al. [4] discussed the incorporation of modularization into mechanical designs. They also developed a measure of relative modularity and a modular design methodology that encouraged modularity and prevented a cascade of product design changes due to changes in life-cycle processes. Brændeland et al. [5] presented a modular approach to the modeling and analysis of risk scenarios with dependencies, such as the electric power supply or telecommunications. In addition, this approach may be used to deduce the risk level of an overall system from previous risk analyses of its constituent systems. Another issue related to this paper is task scheduling, and some representative pieces of research are as follows. Several authors [6–8] developed branch and bound approaches to solve the task scheduling problem; the differences among them lay in branching schemes, elimination rules, and other details. Mori and Tseng [9] proposed a genetic algorithm for multimode resource-constrained project scheduling problems (RCPSPs) and compared it with a stochastic scheduling method proposed by Drexl and Gruenewald [10]. Xu et al. [11] illustrated how to combine the idea of rollout with priority rule heuristics and justification for the RCPSP and examined the resulting solution quality and computational cost. They presented empirical evidence that these procedures are competitive with the best solution procedures described in the literature. In addition, Jarboui et al. [12] designed a combinatorial particle swarm optimization (CPSO) algorithm to solve the RCPSP. The results obtained on a standard set of instances, after extensive experiments, proved to be very competitive in terms of the number of problems solved to optimality. Ju and Chen [13] combined the design structure matrix (DSM) with an improved artificial immune network algorithm to solve a multimode resource-constrained multiproject scheduling problem (MRCMPSP). Moreover, this approach was tested on a set of random cases generated from ProGen, and the results validated the effectiveness of the proposed algorithm compared with other well-known metaheuristic ones.

However, the research mentioned above considered only one aspect of complex engineering system optimization and neglected the connections between task dependencies and task sequencing. For these reasons, in this work, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. It treats this decision process as a two-stage decision-making problem. In the first stage, large interdependent tasks are decomposed into smaller, manageable task groups by transforming the binary form of task relationships into a quantifiable numerical one. This stage requires three steps. Firstly, the DSM is adopted to model dependencies between tasks, and a two-way comparison scheme is then used to transform the binary DSM into a numerical one. Secondly, task clustering based on indices including the bid value and the coordination cost function is carried out so as to decompose the large-scale project into subprojects. Finally, a method for solving the task clustering problem is developed. In the second stage, according to the task clustering result of the first stage, the resource-constrained multiproject scheduling model is built first; subsequently, a discrete artificial bee colony (ABC) algorithm inspired by the intelligent foraging behavior of honeybees is designed to solve this problem. The whole flow chart of this dynamic intelligent decision approach is shown in Figure 1.

This paper is organized as follows. Firstly, the approach to project task clustering in complex engineering systems is proposed in Section 2. Secondly, the mathematical model of the multiproject scheduling problem based on the task clustering result is developed, and a discrete artificial bee colony algorithm for the multiproject scheduling problem is also proposed in Section 3. Thirdly, an illustrative case from an engineering design of a chemical processing system is given in Section 4. Finally, conclusions and possible future research extensions comprise Section 5.

2. An Approach to Task Clustering in Complex Engineering System

The optimization process in a complex system faces difficulties arising not only from technological complexity but also from time pressure. Generally, decomposing large interdependent task groups into small ones by modularization is an efficient way to realize engineering optimization. This process usually includes three steps, that is, task dependency modeling, task clustering, and problem-solving strategies.

2.1. Task Dependency Modeling

In this paper, the task-based modeling method is adopted. Here, the design structure matrix representation of the design process is chosen for three reasons. Firstly, it overcomes the size and visual complexity of many graph-based techniques such as PERT and CPM. Secondly, matrices are easy to manipulate and store in a computer. Finally, DSM modeling has been proven by a number of researchers to be a useful tool in task scheduling and management [14]. It has been widely applied to design and development [15], including design optimization [16], task sequencing [17], control and monitoring of design tasks [18], and risk analysis and evolution [19].

In general, a simple DSM displays the relationships between the components of a system in a compact, visual, and analytically advantageous format. Specifically, in a DSM, each row and its corresponding column are identified with identical labels. Along each row, the marks indicate what other elements the element in that row depends on. Reading down a column reveals what other elements the element in that column provides to. Diagonal elements do not convey any meaning at this point. Thus, in Figure 2, element A provides something to elements B, D, and E, and it also depends on something from elements C and E.
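The row/column reading convention above can be sketched in a few lines of code. The following is a minimal, hypothetical encoding of the Figure 2 example as a binary DSM (the labels and helper names are ours, not from the paper):

```python
# Binary DSM for the 5-element example of Figure 2.
# Convention: dsm[r][c] = 1 means the element in row r depends on
# (receives information from) the element in column c.
labels = ["A", "B", "C", "D", "E"]
idx = {name: i for i, name in enumerate(labels)}

dsm = [[0] * 5 for _ in range(5)]
# Row A: A depends on C and E.
dsm[idx["A"]][idx["C"]] = 1
dsm[idx["A"]][idx["E"]] = 1
# Column A: A provides to B, D, and E.
for receiver in ("B", "D", "E"):
    dsm[idx[receiver]][idx["A"]] = 1

def depends_on(name):
    """Elements the given element receives information from (its row)."""
    return [labels[c] for c in range(5) if dsm[idx[name]][c]]

def provides_to(name):
    """Elements that receive information from the given element (its column)."""
    return [labels[r] for r in range(5) if dsm[r][idx[name]]]

print(depends_on("A"))   # ['C', 'E']
print(provides_to("A"))  # ['B', 'D', 'E']
```

The same matrix read row-wise gives inputs and read column-wise gives outputs, which is exactly the dual reading used by the two-way comparison scheme later.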

In the engineering system optimization process, the typical clustering process comprises two parts: one is to define the modules a system includes, and the other is to define the relationships among the different modules, which together realize the whole function of the system. Considering the implications for system performance as well as the characteristics of the DSM, two steps are needed to model task dependencies: (1) transforming the binary format of task relationships into quantifiable numerical ones so as to represent the dependency strength among project tasks from the viewpoints of structure, function, shape, and so on; (2) building up a multidimensional DSM model and then normalizing the multidimensional data.

2.1.1. Quantitative Approach to Dependencies among Tasks

Dependencies among tasks are determined by their function, structure, shape, and so on. However, the original DSM is populated with "ones" and "zeros" or "X" marks and empty cells. This single attribute conveys relationships between different elements, namely, the "existence" attribute, which signifies the existence or absence of a dependency between the different elements. Compared to a binary DSM, a numerical DSM can contain a multitude of attributes that provide more detailed information on the relationships between the different system elements. An improved description of these relationships provides a better understanding of the system and allows for the development of more complex engineering systems. In this work, we introduce a two-way comparison scheme developed by Su et al. [20] to realize normalization of the binary DSM. The main idea of this approach is to perform pairwise comparisons in one way for tasks in rows and in another way for tasks in columns to measure the dependency between different tasks. In the row-wise perspective, each task in a row serves as a criterion to evaluate the relative connection measures for the nonzero elements in that row. That is, for each pair of tasks compared in a row, it measures which one provides more information input than the other. Similarly, in the column-wise perspective, each task in a column serves as a criterion to evaluate the relative connection measures in that column. That is, for each pair of tasks compared in a column, it measures which one receives more information output than the other. Moreover, in order to obtain the comparison scale, we also use a single level of the analytic hierarchy process (AHP). For example, if the ranking of information input for tasks in a row is the criterion for the comparison matrix, a comparison scale of 3 for task $i$ compared to task $j$ denotes that task $i$ provides somewhat more information input to the downstream task than task $j$ does. Conversely, a comparison scale of 1/9 for task $i$ compared to task $j$ denotes that task $i$ provides much less information input than task $j$ does. This two-way comparison scheme consists of four phases, described as follows.

(1) For every pair of tasks compared, select a criterion and compare which one provides more information input to downstream tasks or which one receives more information output from upstream tasks.

(2) Construct a pairwise comparison matrix $A^{(c)} = (a_{ij})$, where the criterion $c$ is selected from the rows or columns; $a_{ij}$ is the comparison scale for task $i$ compared with task $j$ when $c$ is the criterion, with $a_{ij} \in \{1/9, \ldots, 1, \ldots, 9\}$. In addition, $a_{ji} = 1/a_{ij}$ and $a_{ii} = 1$.

(3) In order to obtain the relative connection measures between the related tasks in rows and in columns, an eigenvector is calculated for each pairwise comparison matrix. This eigenvector denotes the ranking of each compared task within the comparison matrix.

(4) There are in total $n$ eigenvectors for rows and $n$ eigenvectors for columns. Combining the $n$ row eigenvectors, an $n \times n$ matrix $R$ is formed in which each element of each vector is placed back in its location in the original binary DSM matrix. In addition, another matrix $C$ is formed in the same way from the column eigenvectors. The element $r_{ij}$ in matrix $R$ and the element $c_{ij}$ in matrix $C$ should be combined together to measure all relationships. This may be realized by multiplying the corresponding elements of the $R$ and $C$ matrices and taking their geometric mean, that is, $\sqrt{r_{ij} c_{ij}}$. In doing so, a numerical DSM which describes quantitative relationships among tasks is obtained.
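Steps (3) and (4) can be sketched as follows. The 3x3 reciprocal comparison matrix below is illustrative, and the column-wise measure is assumed rather than computed, since only the eigenvector calculation and the geometric-mean combination are being demonstrated:

```python
import math

def principal_eigenvector(A, iters=200):
    """Power iteration; returns the sum-normalized principal eigenvector of A."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# Reciprocal AHP comparison matrix (a_ji = 1/a_ij, diagonal = 1), illustrative.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
row_measure = principal_eigenvector(A)

# Suppose the column-wise analysis produced this measure for the same tasks.
col_measure = [0.5, 0.3, 0.2]

# Combine the two views: element-wise geometric mean gives the numerical entry.
numerical = [math.sqrt(r * c) for r, c in zip(row_measure, col_measure)]
print([round(x, 3) for x in numerical])
```

Power iteration converges to the principal eigenvector for such positive reciprocal matrices, which is why no general eigensolver is needed here.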

2.1.2. Normalized Treatment of Multidimension DSM Model

The numerical DSM model obtained in Section 2.1.1 is a multidimensional matrix. Each element in it contains multiple components which define the information relationships among tasks from different viewpoints. Generally, it is difficult to further process such a multidimensional matrix directly. To solve this problem, a dimensionality reduction method for the multidimensional matrix is proposed, with the expression

$r_{ij} = \sum_{k=1}^{K} w_k\, r_{ij}^{(k)}, \qquad \sum_{k=1}^{K} w_k = 1, \quad (1)$

where $r_{ij}$ is the new element value in the DSM after dimensionality reduction, $k = 1, \ldots, K$ indexes the analysis viewpoints such as space structure, function, shape, and physical interface, $w_k$ is the weight of viewpoint $k$, and $r_{ij}^{(k)}$ is the dependency strength between tasks $i$ and $j$ from viewpoint $k$.
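One simple way to realize such a reduction, assuming a weighted combination of the viewpoint matrices, is sketched below; the two 2x2 viewpoint matrices and the equal weights are invented for the example:

```python
def reduce_dsm(views, weights):
    """Collapse K viewpoint matrices into one numerical DSM by a weighted sum.

    views: list of K n*n matrices of dependency strengths, one per viewpoint.
    weights: K weights, assumed to sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    n = len(views[0])
    return [[sum(w * v[i][j] for w, v in zip(weights, views))
             for j in range(n)] for i in range(n)]

structure = [[0, 0.6], [0.2, 0]]   # dependency strengths, structural viewpoint
function_ = [[0, 0.4], [0.8, 0]]   # dependency strengths, functional viewpoint
r = reduce_dsm([structure, function_], [0.5, 0.5])
print(r)
```

Other aggregation rules (e.g., a maximum or a norm over viewpoints) would fit the same interface; the weighted sum is only one plausible choice.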

2.2. Task Clustering Modeling

The main purpose of task clustering is to place tasks with close connections in the same block so as to form independent project blocks, while tasks with loose connections fall into different blocks. In doing so, it becomes easier to realize optimization in the engineering system.

At present, there are many methods used to realize task clustering, such as the similarity coefficient method, the ranking method, and the path search method. However, these methods are not satisfactory when applied to the clustering of a DSM, especially when the group size is unknown in advance. For this reason, other researchers have developed more efficient methods, a typical one of which was suggested by Thebeau [21]. He combined bidding and evaluation to realize task clustering. The bid value measures the dependency degree between an element and a group, is proportional to the dependency density, and is used to determine which group the selected element belongs to. It can be defined as

$B_j = \frac{D_j^{\alpha}}{n_j^{\beta}}, \qquad j = 1, \ldots, m, \quad (2)$

where $m$ is the number of groups, $B_j$ is the bid value of group $j$ for the selected element, $D_j$ is the dependency gross in group $j$, $n_j$ is the number of tasks that group $j$ contains, $\alpha$ is a weight exponent, and $\beta$ is a bid exponent of groups.

Another index presented by Thebeau [21] is the coordination cost, which is the objective function of the bid-and-evaluation method. It comprehensively describes the dependencies among tasks as well as the size of the corresponding group. It is formulated as

$C = \sum_{i,j} C_{\mathrm{in}}(i,j) + \sum_{i,j} C_{\mathrm{out}}(i,j), \quad (3)$

with

$C_{\mathrm{in}}(i,j) = (d_{ij} + d_{ji})\, n_k^{\gamma}, \qquad C_{\mathrm{out}}(i,j) = (d_{ij} + d_{ji})\, N^{\gamma}, \quad (4)$

where $C_{\mathrm{in}}$ is the coordination cost inside groups (i.e., tasks $i$ and $j$ belong to the same group $k$) and $C_{\mathrm{out}}$ is the coordination cost outside groups (i.e., tasks $i$ and $j$ belong to different groups). $d_{ij}$ and $d_{ji}$ denote the relationships between tasks $i$ and $j$, $n_k$ is the number of tasks that group $k$ contains, $N$ is the total number of tasks that the whole DSM matrix contains, and $\gamma$ is a weight exponent of bid groups.
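The two indices can be sketched as follows, in the spirit of Thebeau's bid-and-evaluation scheme described above (the function names, exponents, and the tiny DSM are illustrative):

```python
def bid(d, group, task, alpha=1.0, beta=1.0):
    """Bid of a group for a task: dependency gross vs. group size."""
    gross = sum(d[task][j] + d[j][task] for j in group if j != task)
    return gross ** alpha / len(group) ** beta if gross > 0 else 0.0

def coordination_cost(d, groups, gamma=1.0):
    """Intra-group cost scales with group size, inter-group with DSM size."""
    n = len(d)
    member = {t: k for k, g in enumerate(groups) for t in g}
    cost = 0.0
    for i in range(n):
        for j in range(n):
            if i == j or (d[i][j] == 0 and d[j][i] == 0):
                continue
            size = len(groups[member[i]]) if member[i] == member[j] else n
            cost += (d[i][j] + d[j][i]) * size ** gamma
    return cost

d = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 0]]
# Clustering the coupled tasks 0 and 1 together is cheaper than splitting them.
print(coordination_cost(d, [[0, 1], [2]]))
print(coordination_cost(d, [[0], [1], [2]]))
```

Because inter-group dependencies are penalized by the full DSM size while intra-group ones are penalized only by the group size, minimizing this cost naturally pulls coupled tasks into the same group without fixing the number of groups in advance.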

2.3. A Method for Solving Task Clustering

In general, there exist $m^n$ possible combination schemes for the clustering of a DSM containing $n$ tasks into $m$ groups. According to the characteristics of the problem, we adopt a heuristic approach in this research, whose concrete steps are as follows. (1) Initialize the problem, where every task is taken as an independent group. (2) Randomly select one of the tasks and calculate each of the other groups' bid values for it. Subsequently, assign it to the group with the highest bid value after repeated calculation. (3) Delete empty groups, subgroups, and identical ones. (4) Choose a new task and repeat the process mentioned above until all tasks have been traversed and the system reaches a stable state.
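The heuristic steps above can be sketched as follows; the bid used inside the loop is the dependency sum divided by group size, and the 4-task DSM is invented for the example:

```python
import random

def cluster(d, rounds=200, seed=0):
    rng = random.Random(seed)
    n = len(d)
    groups = [[i] for i in range(n)]           # (1) every task its own group
    for _ in range(rounds):
        t = rng.randrange(n)                   # (2) pick a task at random
        cur = next(g for g in groups if t in g)
        bids = [(sum(d[t][j] + d[j][t] for j in g) / len(g), g)
                for g in groups if g is not cur]
        if not bids:
            break
        best_bid, best = max(bids, key=lambda b: b[0])
        if best_bid > 0:                       # move t to the highest bidder
            cur.remove(t)
            best.append(t)
        groups = [g for g in groups if g]      # (3) delete empty groups
    return [sorted(g) for g in groups if g]

d = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
print(cluster(d))  # the two coupled pairs end up in separate groups
```

A full implementation would also fold in the coordination cost as an acceptance test for each move; the sketch keeps only the bidding loop to show the control flow.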

3. Multiproject Tasks Scheduling

In the above sections, tasks that have more information relationships are converged into the same group. Nevertheless, how to sequence these tasks is another important problem in realizing engineering system optimization. As a result, a multiproject task scheduling problem is proposed in this section. Generally, the scheduling process involves the allocation of given resources to projects to determine the start and completion times of the detailed tasks. The allocation of scarce resources then becomes a major objective of the problem, and several compromises have to be made to solve the problem to the desired level of near-optimality [22]. Usually, this process includes two steps: first, build up the model of the multiproject scheduling problem; second, solve the problem using intelligent algorithms.

3.1. The Model of Multiproject Scheduling Problem

The problem consists of a number of parallel projects, and the following assumptions are taken into consideration. (1) A task cannot start unless all of its predecessors have been completed. (2) There are only renewable resources; nonrenewable ones are not considered. (3) Task preemption is not allowed. (4) In a multiproject environment, the delay of any project will lead to iterations and alterations of related follow-up work, so we assume that the objective is to minimize the completion time of all projects rather than that of a certain project.

Based on these assumptions, the problem and the conceptual model are described as follows. The considered problem consists of $I$ parallel projects, each project $i$, $i = 1, \ldots, I$, being composed of tasks $j = 1, \ldots, J_i$. The tasks are interrelated by two kinds of constraints: precedence constraints and resource constraints. While being processed, task $j$ of project $i$ requires $r_{ijk}$ units of renewable resource type $k$ during each period of its nonpreemptive duration $d_{ij}$. Each resource type $k$ has a fixed and limited available amount $R_k$. In addition, the optimization objective is to shorten the makespan of all projects by finding feasible starting times of tasks and allocations of resources. Therefore, the model of the problem can be described as follows:

$\min \sum_{i=1}^{I} w_i (C_i - CP_i), \quad (5)$

subject to

$s_{ij} \ge f_{ih} \quad \forall h \in P_{ij}, \quad (6)$

$\sum_{(i,j) \in A(t)} r_{ijk} \le R_k \quad \forall k, t, \quad (7)$

where $w_i$ denotes the weight of project $i$ and $I$ represents the number of projects. $C_i$ is the completion time of project $i$, and $CP_i$ is the resource-unconstrained critical path length of project $i$. $A(t)$ is the set of tasks being executed at time $t$, $P_{ij}$ is the predecessor set of task $j$ in project $i$, and $s_{ij}$ and $f_{ih}$ denote task start and finish times. The objective function (5) seeks to minimize the performance measure; obviously, minimizing this criterion is equivalent to minimizing the mean resource-constrained completion time of the projects. Constraint (6) imposes the precedence relations between tasks, and constraint (7) limits the resource demand imposed by the tasks being processed at any time to the available capacity. This means the number of available resources changes with the completion and starting times of tasks.
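The objective (5) and the two constraint families (6) and (7) can be checked for a given schedule with a short sketch; the task data, weights, and critical path lengths below are invented, and a single renewable resource is assumed:

```python
def weighted_delay(schedule, tasks, weights, cp_lengths):
    """Objective: weighted sum of project completion times minus CP lengths."""
    finish = {t: schedule[t] + tasks[t][1] for t in tasks}
    proj_completion = {}
    for t, (p, dur, dem, preds) in tasks.items():
        proj_completion[p] = max(proj_completion.get(p, 0), finish[t])
    return sum(weights[p] * (proj_completion[p] - cp_lengths[p])
               for p in proj_completion)

def feasible(schedule, tasks, capacity):
    finish = {t: schedule[t] + tasks[t][1] for t in tasks}
    # Precedence: a task may start only after all its predecessors finish.
    for t, (_, _, _, preds) in tasks.items():
        if any(schedule[t] < finish[h] for h in preds):
            return False
    # Resources: total demand of tasks active at each time within capacity.
    horizon = max(finish.values())
    for time in range(horizon):
        load = sum(dem for t, (_, dur, dem, _) in tasks.items()
                   if schedule[t] <= time < finish[t])
        if load > capacity:
            return False
    return True

# task name -> (project, duration, resource demand, predecessors)
tasks = {"a": (0, 2, 1, []), "b": (0, 2, 1, ["a"]), "c": (1, 3, 1, [])}
sched = {"a": 0, "b": 2, "c": 0}
print(feasible(sched, tasks, capacity=2))
print(weighted_delay(sched, tasks, {0: 0.5, 1: 0.5}, {0: 4, 1: 3}))
```

In the example both projects finish exactly on their critical path lengths, so the weighted delay is zero; shrinking the capacity to 1 makes the concurrent tasks "a" and "c" infeasible.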

However, during the scheduling of multiple projects, some tasks cannot be performed concurrently due to resource and precedence constraints, which also increases the difficulty of solving the problem. In this circumstance, precedence constraints among tasks should be satisfied first so as to determine an eligible task set. Then, resource conflicts that may occur within this eligible set should be identified in order to decide the task priority values issued from the selected priority rule. Therefore, combined with the DSM, the following gives a simplifying treatment of precedence constraints and task priority values, respectively: (1) set up the DSM clustering model of the multiproject problem based on Section 2; (2) construct an eligible task set $E$, a set of tasks being executed $A$, and a set of completed tasks $F$, where $E$ can be generated through the DSM; that is, if every task that a given task depends on in its row already belongs to $F$, that task can be added to $E$, and similarly we can determine whether a task belongs to $A$, so that $E$ can be determined; (3) identify resource conflicts among eligible tasks: if a conflict exists, perform the tasks according to their priority values; if not, perform them concurrently and add the tasks to be scheduled to $A$; (4) when tasks have been fulfilled, update $E$, $A$, and $F$, determine the next task set which may cause resource conflicts, and repeat the process until all tasks have been fulfilled.
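Deriving the eligible set from the DSM, as in step (2), can be sketched as follows (the 3-task DSM is illustrative; the convention, as before, is that a mark in a row denotes a dependency that must be satisfied):

```python
def eligible(dsm, completed, started):
    """Tasks whose row dependencies are all completed and which are not yet run."""
    n = len(dsm)
    return [t for t in range(n)
            if t not in completed and t not in started
            and all(j in completed for j in range(n) if dsm[t][j])]

dsm = [[0, 0, 0],
       [1, 0, 0],   # task 1 depends on task 0
       [1, 1, 0]]   # task 2 depends on tasks 0 and 1
print(eligible(dsm, completed=set(), started=set()))   # [0]
print(eligible(dsm, completed={0}, started=set()))     # [1]
print(eligible(dsm, completed={0, 1}, started=set()))  # [2]
```

Resource-conflict resolution would then pick among the eligible tasks by priority value; that part is omitted here.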

3.2. Artificial Bee Colony Based Multiproject Scheduling Problem

In this section, the basic artificial bee colony algorithm based on the foraging behavior of honeybees is introduced firstly. Subsequently, the discrete artificial bee colony algorithm used for solving the multiproject scheduling problem is proposed.

3.2.1. Honeybee Modeling

In the basic ABC algorithm, the colony of artificial bees contains three groups of bees: employed bees, onlookers, and scouts. Employed bees determine a food source within the neighborhood of the food source in their memory and share their information with onlookers within the hive, while onlookers select one of the food sources according to this information. In addition, a bee carrying out a random search is called a scout. In the ABC algorithm, the first half of the colony consists of the employed bees and the second half comprises the onlookers. There is exactly one employed bee per food source; that is, the number of employed bees is equal to the number of food sources around the hive. The position of a food source represents a possible solution of the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution.

The initial population of solutions is filled with SN randomly generated $D$-dimensional real-valued vectors (i.e., food sources). Each food source is generated as follows:

$x_{ij} = x_j^{\min} + \operatorname{rand}(0,1)\,(x_j^{\max} - x_j^{\min}), \quad (8)$

where $i = 1, \ldots, SN$ and $j = 1, \ldots, D$; $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds for dimension $j$, respectively. These food sources are assigned randomly to the SN employed bees, and their corresponding fitness is evaluated.

In order to produce a candidate food position from the old one, the ABC uses the following equation:

$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \quad (9)$

where $k \in \{1, \ldots, SN\}$ and $j \in \{1, \ldots, D\}$ are randomly chosen indexes. Although $k$ is determined randomly, it has to be different from $i$. $\phi_{ij}$ is a random number in the range $[-1, 1]$. Once $v_i$ is obtained, it will be evaluated and compared to $x_i$. If the fitness of $v_i$ is equal to or better than that of $x_i$, $v_i$ will replace $x_i$ and become a new member of the population; otherwise, $x_i$ is retained.

After all employed bees complete their searches, onlookers evaluate the nectar information taken from all employed bees and choose one of the food source sites with a probability related to its nectar amount. In the basic ABC, a roulette wheel selection scheme, in which each slice is proportional in size to the fitness value, is employed as follows:

$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n}, \quad (10)$

where $fit_i$ is the fitness value of solution $i$. Obviously, the higher $fit_i$ is, the higher the probability that the $i$th food source is selected.

If a position cannot be improved further through a predetermined number of cycles, then that food source is assumed to be abandoned. The scouts can then accidentally discover rich, entirely unknown food sources according to (8). This predetermined number of cycles, called the "limit" for abandoning a food source, is an important control parameter of the ABC algorithm.

There are three control parameters used in the basic ABC: the number of food sources, which is equal to the number of employed bees (SN); the value of the limit; and the maximum cycle number (MCN).
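A compact sketch of the whole basic ABC cycle on the sphere function follows; SN, the limit, and the cycle count are the control parameters just named. The absence of bound clamping and the reuse of slightly stale fitness values in the onlooker phase are our own simplifications, not part of the canonical description:

```python
import random

def abc(f, dim, lower, upper, SN=10, limit=20, cycles=200, seed=1):
    rng = random.Random(seed)
    new = lambda: [lower + rng.random() * (upper - lower) for _ in range(dim)]
    foods = [new() for _ in range(SN)]   # one food source per employed bee
    trials = [0] * SN
    best = min(foods, key=f)

    def try_neighbor(i):
        # Perturb one random dimension relative to another food source.
        k = rng.choice([s for s in range(SN) if s != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        if f(v) < f(foods[i]):
            foods[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(SN):              # employed bee phase
            try_neighbor(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]
        total = sum(fits)
        for _ in range(SN):              # onlooker phase: roulette wheel
            r, acc = rng.random() * total, 0.0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    try_neighbor(i)
                    break
        cur = min(foods, key=f)          # remember the best-so-far solution
        if f(cur) < f(best):
            best = cur
        for i in range(SN):              # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i], trials[i] = new(), 0
    return best

sphere = lambda x: sum(v * v for v in x)
best = abc(sphere, dim=3, lower=-5.0, upper=5.0)
print(sphere(best))  # a small value near 0
```

Keeping a separate best-so-far memory matters because the scout phase may reset the best food source once its trial counter exceeds the limit.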

3.2.2. Discrete Artificial Bee Colony for Solving Multiproject Scheduling Problem

The multiproject scheduling problem is a typical NP-hard problem, and traditional exact algorithms may require large computation times and still find it very difficult to reach the optimal solution. Over the last decades, various kinds of optimization algorithms based on swarm intelligence have been designed and applied to function optimization, task allocation, and other problems [23]. Some of the popular approaches are ant colony optimization (ACO) [24, 25], particle swarm optimization (PSO) [26, 27], artificial immune systems (AISs) [28], and so on. Among them, Karaboga [29] described a bee swarm algorithm called the artificial bee colony (ABC) algorithm based on the foraging behavior of honeybees, compared the performance of the ABC algorithm with those of other well-known modern heuristic algorithms on unconstrained optimization problems, and showed that the ABC algorithm is superior to the others. However, the basic ABC algorithm was designed to solve continuous optimization problems. In order to make it applicable to the scheduling problem, a discrete ABC algorithm is proposed in this section, and the concrete steps are as follows.

(1)  Solution Representation. According to the characteristics of the problem, a direct problem representation is used. The complete information of a schedule for the problem consists of a task priority rule, the tasks, and their corresponding modes.

(2)  Population Initialization. To guarantee an initial population with a certain quality and diversity, a portion of the food sources is generated using priority rules while the others are produced randomly. For the project scheduling problem, the smallest slack time (SST), the earliest due date (EDD), and the smallest execution time (SET) rules are commonly adopted to yield initial schedules. Therefore, this work applies these rules to produce three different solutions. For example, the EDD rule sorts the activities in ascending order of their due dates; SST and SET work analogously.
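The three priority rules reduce to sorting the activities by different keys; a sketch with invented due dates, slacks, and execution times:

```python
# Hypothetical activity attributes: due date, slack, and execution time.
tasks = {"a": {"due": 8, "slack": 2, "exec": 3},
         "b": {"due": 5, "slack": 4, "exec": 1},
         "c": {"due": 9, "slack": 1, "exec": 2}}

edd = sorted(tasks, key=lambda t: tasks[t]["due"])    # earliest due date
sst = sorted(tasks, key=lambda t: tasks[t]["slack"])  # smallest slack time
set_ = sorted(tasks, key=lambda t: tasks[t]["exec"])  # smallest execution time
print(edd, sst, set_)
```

Each sorted sequence seeds one food source; the remaining SN - 3 food sources are then filled with random permutations.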

(3)  Employed Bee Phase. The employed bees generate food sources in the neighborhood of their current positions in the ABC algorithm. In this work, three operations, SWAP, INSERT, and INVERSE, are used to produce neighboring solutions: the SWAP operator interchanges two activities in different positions; the INSERT operator removes an activity from its original position and inserts it into a new position; and the INVERSE operator generates a neighbor by inverting the sequence between two activities in different positions. Note that if a neighboring solution does not satisfy the precedence constraints, the old one should be retained. Furthermore, in order to enrich the search region and diversify the population, five related approaches based on the SWAP, INSERT, or INVERSE operators are adopted to produce neighboring solutions, as follows: (1) performing one SWAP operator on a sequence; (2) performing two SWAP operators on a sequence; (3) performing one INSERT operator on a sequence; (4) performing two INSERT operators on a sequence; (5) performing the INVERSE operator on a sequence.
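The three neighborhood moves can be sketched as pure functions on a task sequence (indices are 0-based, and the sample sequence is illustrative):

```python
def swap(seq, i, j):
    """Interchange the activities at positions i and j."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insert(seq, i, j):
    """Remove the activity at position i and reinsert it at position j."""
    s = seq[:]
    s.insert(j, s.pop(i))
    return s

def inverse(seq, i, j):
    """Reverse the subsequence between positions i and j, inclusive."""
    s = seq[:]
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

seq = [1, 2, 3, 4, 5]
print(swap(seq, 0, 4))     # [5, 2, 3, 4, 1]
print(insert(seq, 0, 3))   # [2, 3, 4, 1, 5]
print(inverse(seq, 1, 3))  # [1, 4, 3, 2, 5]
```

Each function copies its input, so a move that violates the precedence constraints can simply be discarded while the old sequence is retained, matching the rule stated above.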

(4)  Onlooker Bee Phase. In the basic ABC algorithm, an onlooker bee chooses a food source depending on the probability value associated with that food source. In other words, the onlooker bee chooses one of the food sources after making a comparison among the food sources around the current position which is similar to “roulette wheel selection” in genetic algorithm (GA). In this work, we also retain this approach to make the algorithm converge fast.

(5)  Scout Bee Phase. In the basic ABC algorithm, a scout produces a food source randomly. This decreases the search efficiency, since the best food source in the population often carries better information than the others. As a result, in this paper, the scout produces a food source by applying several SWAP, INSERT, and INVERSE operators to the best food source in the population. In addition, to avoid the algorithm being trapped in a local optimum, this process should be repeated several times.

4. Simulation Experiments

In this section, a numerical example derived from an engineering design of a chemical processing system is first used to help understand the proposed dynamic intelligent decision approach. After that, further analysis and discussion of the effect of task clustering analysis on scheduling schemes, as well as the performance of the ABC algorithm in solving the multiproject scheduling problem, are given.

4.1. Numerical Experiment

To demonstrate the effectiveness of our proposed approach, we use the 20-task binary DSM derived from Su et al. [20]. In this example, an engineering design of a chemical processing system has 20 tasks, and detailed task information is listed in Table 1. The whole design process usually involves several disciplines. We aggregated these disciplines into four types: systems engineers, software engineers, hardware engineers, and supporting engineers. Hardware engineers include chemical engineers as well as electrical engineers, and supporting engineers consist of the disciplines that play a supporting role during the design process, for example, manufacturing, logistics, tests, maintainability, reliability, and so forth. Assume that the ready-time is 0. The resource availability is set to .

In the first stage, according to the dependency modeling technology presented in Section 2, the DSM model is set up, as shown in Figure 3(a), where the empty elements represent no relationship between two tasks and the number "1" represents input or output information among tasks. For example, task 1 requires information from tasks 13 and 15 when it executes. Additionally, task 1 must provide information to tasks 4, 5, 10, 14, 16, and 18; otherwise, they cannot start. Nevertheless, Figure 3(a) only denotes the "existence" attribute of a dependency between the different tasks. In order to further reveal the matrix structure, it is necessary to quantify the dependencies among tasks.

Subsequently, using the two-way comparison scheme, we can transform the binary DSM into a numerical one. Here, two criteria named task evolution (EC) and task sensitivity (SC) are adopted to perform the pairwise comparisons, where the former measures the rate of information transfer from one element to another, and the latter measures the degree to which a change of information in one element affects another. For example, if the criterion for the comparison matrix is task evolution, a comparison scale of 9 for task 4 compared to task 18 indicates that task 4 evolves much faster than task 18 when they receive information from task 1. In addition, a comparison scale of 1/3 for task 5 compared to task 4 represents that the evolution of task 5 is somewhat slower than that of task 4 when they receive information from task 1. In doing so, we can obtain the comparison matrix of evolution. In the same way, if the criterion for the comparison matrix is task sensitivity, we can obtain the comparison matrix of sensitivity.

The detailed computation process, including every pair of tasks compared in the DSM, is not given due to the length limitation of this paper; the final computation result after the normalized treatment described in Section 2.1.2 is shown in Figure 3(b).

After dependency modeling based on the DSM, the clustering algorithm based on the coordination cost of Section 2.2 is used. Firstly, initialize the problem, where every task is taken as an independent group. Then, select a task randomly, calculate each group's bid value for it, and assign it to the group with the highest bid value through loop computation until the total coordination cost is no longer reduced. Finally, update the groups and delete empty ones. When all the tasks have been traversed and the system arrives at a stable state, the procedure ends. The detailed task clustering procedure is shown in Figure 4.

The final task clustering result is displayed in Figure 5, and the corresponding curve of coordination cost versus iteration number is shown in Figure 6. We can see from Figure 5 that the whole design process can be divided into four blocks: block 1 contains three tasks (3, 7, and 12), all of which can be executed without input information from others; block 2 consists of tasks 2, 9, 13, and 15, which must receive information from block 1; block 3 includes tasks 1, 4, 5, 8, 10, 11, 17, and 18, all of which depend on information from blocks 1 and 2; and block 4 comprises tasks 6, 14, 16, 19, and 20, where tasks 16, 19, and 20 only receive input information from other tasks and provide nothing to others. Furthermore, the cost curve in Figure 6 reveals that the coordination cost converges to a minimum, which indicates that the whole system has minimum complexity. Moreover, according to the clustering analysis, each block shown in Figure 5 can be defined as an independent subproject; that is, the original large-scale project is decomposed into four simple, manageable, and small subprojects. Subprojects are defined in this way because tasks belonging to the same block have higher correlation degrees, whereas tasks belonging to different blocks have lower correlation degrees. In doing so, a suitable decomposition scheme for the large-scale project is obtained, which reduces the difficulty of the task planning problem solved by the ABC algorithm in the second stage.

In the second stage, according to the clustering result, the task planning problem can be transformed into a multiproject scheduling one. Since the objective of the problem is to minimize the total delay time of all projects, the shortest makespans of all projects must be defined in advance, and the critical path method (CPM) is used to obtain them (i.e., 5, 18, 28, and 13). Moreover, to simplify the mathematical model, we assign all projects the same weight coefficient. Subsequently, the discrete ABC algorithm mentioned in Section 3.2 is used to solve this problem, with the related parameters set as described there. The planning result is shown in Figure 7, and the delay time of the whole process is 41.
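The CPM computation of a project's shortest makespan reduces to a forward pass over the precedence network. The sketch below illustrates this on a small made-up network; the tasks, durations, and precedence relations are illustrative, not the paper's data.

```python
def cpm_makespan(durations, predecessors):
    """Forward pass of CPM: earliest finish of each task, then the maximum,
    which is the project's shortest possible makespan."""
    finish = {}

    def earliest_finish(t):
        if t not in finish:
            # A task starts as soon as all of its predecessors have finished.
            start = max((earliest_finish(p) for p in predecessors.get(t, [])),
                        default=0)
            finish[t] = start + durations[t]
        return finish[t]

    return max(earliest_finish(t) for t in durations)

# Illustrative network: A and B can start immediately, C waits for both, D follows C.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
predecessors = {"C": ["A", "B"], "D": ["C"]}
# cpm_makespan(durations, predecessors) -> 8  (critical path A -> C -> D)
```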

We can see from Figure 7 that tasks in the same project are executed more tightly together and are hardly affected by tasks from other projects. This is because, after the clustering operation, tasks with tight dependencies belong to the same project. In addition, the task clustering analysis helps to reduce the search space and further improves the efficiency of the solution process in the second stage.

4.2. Analysis and Discussions

In this section, the effect of task clustering analysis on scheduling schemes, as well as the performance of the ABC algorithm for solving the multiproject scheduling problem, is discussed, and extensive experiments have been conducted to analyze them. The test problems are generated by the project generator ProGen developed by Kolisch et al. [30], with 30, 60, 90, 120, 150, 180, and 210 tasks per project, respectively. For each problem type, we generated 20 instances. Each task can use up to four resources. The single-project test instances were generated with randomly selected values of network complexity and resource factor, where network complexity reflects the density of interdependence relations among tasks. Table 2 shows the related information about the projects used for the problem.

4.2.1. The Effect of Task Clustering Analysis on Scheduling Schemes

In order to analyze the effect of task clustering analysis on scheduling schemes, two indexes are introduced: the average deviation of project delay time and the computation time, where the former is used to measure the robustness of search algorithms and the latter to compare computational complexity. Table 3 shows the results obtained on the problem subsets. We can see that, after the task clustering operation, the computational complexity of the scheduling problems is obviously reduced. The reason is that a large-scale project is decomposed into several simple, manageable, and small subprojects using the task clustering approach proposed in this work. In addition, after the task clustering operation, the average deviation of project delay time is also reduced when the project scale is larger. This means that the task clustering analysis helps to decrease the search space and further improves the efficiency of the algorithms in the second decision stage.

4.2.2. Performance of ABC Algorithm for Solving Multiproject Scheduling Problem

According to the task clustering result obtained in the first decision stage, the ABC algorithm is used to solve the multiproject scheduling problem in the second stage. In order to get the most out of the ABC algorithm, the parameter setting mentioned in Section 3.2 was applied to a set of generated multiproject problems after the task clustering operation. Table 4 shows the results obtained by the various algorithms on the problem subsets. The proposed algorithm is compared with simulated annealing (SA), ant colony optimization (ACO), and artificial immune system (AIS) in terms of the average project delay (APD), the lower total makespan (LTM) of the multiproject, and the CPU processing time (CPUt). In this table, the first column indicates the problem subset; the second column shows the number of schedules used as the stopping criterion; and the third to fifth columns report the averages of APD, LTM, and CPUt for the various algorithms. From Table 4, it is seen that, over the 7 subsets, the ABC algorithm is superior to the others when the problem scale is larger. A comparison of the average project delays and lower total makespans obtained by the four approaches on all problems shows that our approach is clearly better than AIS and SA for all problems but slightly worse than ACO when the problem scale is small. In addition, when the number of schedules increases, our approach continues to search for the optimum schedule, whereas the others are hardly influenced by it. This is because our approach introduces operations, including the scout-bee searching process, that help maintain the diversity of individuals. However, our algorithm may require more computation time, for two reasons: first, the five related approaches based on the SWAP, INSERT, or INVERSE operators mentioned in Section 3.2.2 may take more time to find the local optimum; second, new individuals generated by scout bees are introduced to replace the worse ones so as to open up a new search region.
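The three neighborhood operators named above act on a task sequence as sketched below. The indices are fixed here for clarity, whereas the actual algorithm chooses them at random; how Section 3.2.2 combines the operators into its five approaches, and how precedence-infeasible neighbors are repaired or rejected, is not reproduced in this sketch.

```python
def swap(seq, i, j):
    """SWAP: exchange the tasks at positions i and j."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insert(seq, i, j):
    """INSERT: remove the task at position i and reinsert it at position j."""
    s = seq[:]
    task = s.pop(i)
    s.insert(j, task)
    return s

def inverse(seq, i, j):
    """INVERSE: reverse the subsequence between positions i and j."""
    s = seq[:]
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

seq = [1, 2, 3, 4, 5]
# swap(seq, 0, 4)    -> [5, 2, 3, 4, 1]
# insert(seq, 0, 3)  -> [2, 3, 4, 1, 5]
# inverse(seq, 1, 3) -> [1, 4, 3, 2, 5]
```

Each operator returns a modified copy, so a candidate neighbor can be evaluated against the precedence constraints before it replaces the current schedule.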

5. Conclusions

In this study, we have presented a two-stage intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. Both the dependencies among tasks and the sequence of project tasks are clearly identified. In the first stage, in order to optimize the complex engineering system, we further investigate the task evolution degree and sensitivity degree based on a two-way comparison scheme. In addition, we have introduced a DSM model that systematically quantifies the dependency strength between related tasks and decomposes large interdependent task groups into smaller, manageable subprojects. Subsequently, according to the decomposition results obtained from the first stage, a discrete artificial bee colony (ABC) algorithm is developed for project task planning in the second stage, where the DSM has also been adopted to simplify the mathematical model of task planning. In doing so, a better sequence of project tasks is expected because of fewer communication links and simpler information flows among tasks in different subprojects. The major contributions of this research are as follows: (a) an integrated model of complex engineering system optimization is developed, which leads to a visible dependency structure and a simpler task sequence through systematic procedures; (b) the task dependencies in the complex system are clearly identified by quantitative measures, which offers great research potential for understanding and analyzing the implications of connectivity on different aspects of complex system performance; (c) the decomposition of a large number of task groups and the planning of project tasks lay a sound foundation for engineering optimization. In addition, each subproject obtained from decomposition is limited to a manageable size so that tasks in the same subproject have more communication links. The results serve as the task requirements for efficiently scheduling project tasks.

In this work, we have focused on dependency modeling of project tasks in complex engineering system optimization. For sound project task management, it is also necessary to consider influences from external factors. For this reason, our future research will focus on analyzing the effects of customers' requirements on the engineering optimization process. Beyond that, how to apply this two-stage intelligent decision approach to other optimization fields also needs further study.

Acknowledgments

This research is supported by National Natural Science Foundation of China (Grant nos. 71071141, 71001088, and 71171089), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant nos. 200804870070 and 20103326120001), and Humanity and Sociology Foundation of Ministry of Education of China (Grant no. 11YJC630019).