Abstract

This paper gives an overview of three simulation studies in dynamic project scheduling integrating baseline scheduling with risk analysis and project control. This integration is known in the literature as dynamic scheduling. An integrated project control method is presented using a project control simulation approach that combines the three topics into a single decision support system. The method makes use of Monte Carlo simulations and connects schedule risk analysis (SRA) with earned value management (EVM). A corrective action mechanism is added to the simulation model to measure the efficiency of two alternative project control methods. At the end of the paper, a summary of recent and state-of-the-art results is given, and directions for future research based on a new research study are presented.

1. Introduction

Completing a project on time and within budget is not an easy task. Monitoring and controlling projects consists of processes to observe project progress in such a way that potential problems can be identified in a timely manner and corrective actions can be taken, when necessary, to bring endangered projects back on track. The key benefit is that project performance is observed and measured regularly to identify variances from the project baseline schedule. Therefore, monitoring the progress and performance of projects in progress using integrated project control systems requires a set of tools and techniques that should ideally be integrated into a single decision support system. In this paper, such a system is used in a simulation study using the principles of dynamic scheduling [13].

The term dynamic scheduling is used to refer to an integrative project control approach using three main dimensions which can be briefly outlined along the following lines.
(i) Baseline scheduling is necessary to construct a timetable that provides a start and finish date for each project activity, taking activity relations, resource constraints, and other project characteristics into account and aiming to reach a certain scheduling objective.
(ii) Risk analysis is crucial to analyse the strengths and weaknesses of the project baseline schedule in order to obtain information about the schedule sensitivity and the impact of potential changes that undoubtedly occur during project progress.
(iii) Project control is essential to measure the (time and cost) performance of a project during its progress and to use the information obtained during the scheduling and risk analysis steps to monitor and update the project and to take corrective actions in case of problems.

The contribution and scope of this paper are fourfold. First, the paper aims at introducing the reader to the three dimensions of the integrated dynamic scheduling approach. Second, a project control simulation approach that can be used for testing existing and novel project scheduling and control techniques is presented. The study is based on a Monte Carlo simulation approach using the three dimensions of dynamic scheduling. Third, the project control simulation approach is illustrated using three simulation experiments published in the literature. This paper provides a summary of these three simulation studies and gives general results. Finally, a summary will be given, and directions for future research avenues will be highlighted.

For a recent overview of the integration between baseline scheduling, risk analysis, and project control, the reader is referred to the book written by Vanhoucke [2]. Many of the topics discussed in that book will be used as illustrative example studies in the following sections. The outline of this paper is as follows. In Section 2, a general framework for project control simulation studies is presented, and its four steps are briefly discussed. Section 3 gives an overview of three simulation studies published in the literature that measure the importance and relevance of risk analysis and project control in relation to the baseline schedule. This section also reviews the main techniques to generate fictitious project data. In Section 4, some recommendations for future research topics will be discussed. Section 5 draws general conclusions.

2. Monte Carlo Simulation

In this section, a general project control simulation algorithm that makes use of Monte Carlo simulation runs and aims at combining the three dimensions of dynamic scheduling into a single research approach is presented. The pseudocode of the algorithm is given below, and details are discussed in the following subsections.

Algorithm 1 (Project Control Simulation).
Step 1. Construct Baseline Schedule.
Step 2. Define Activity Distributions.
Step 3. Run Simulation and Measure.
Step 4. Report Output Metrics.

2.1. Baseline Scheduling

The project baseline schedule plays a central role in any project control simulation study since it acts as a point of reference for all calculations done during the simulation runs of Step 3. Constructing a baseline schedule is necessary to have an idea about the expected time and cost of a project. Indeed, by determining an activity timetable, a prediction can be made about the expected time and cost of each individual activity and the complete project. This timetable will be used throughout all simulation studies as the point of reference from which every deviation will be monitored and saved. These deviations will then be used to calculate the output metrics of Step 4 and to draw general conclusions as will be illustrated in later sections of this paper.

Project baseline scheduling can be defined as a mathematical approach to determine start and finish times for all activities of the project, taking into account precedence relations between these activities as well as a limited availability of resources, while optimising a certain project objective, such as lead-time minimisation, cash-flow optimisation, levelling of resource use, and many more.
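To make the scheduling step more tangible, the following sketch computes an earliest-start baseline schedule for a small activity-on-the-node network with finish-start precedence relations, ignoring resource constraints. The four activities, their durations, and the network are illustrative assumptions, not data taken from the studies discussed in this paper.

```python
# Minimal earliest-start (CPM-style) baseline schedule for an activity-on-the-node
# network with finish-start precedence relations and no resource constraints.
# The activities, durations, and network below are illustrative assumptions.

durations = {"A": 3, "B": 5, "C": 2, "D": 4}                  # deterministic estimates
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def earliest_start_schedule(durations, predecessors):
    """Return {activity: (start, finish)} for an earliest-start schedule."""
    schedule = {}
    remaining = set(durations)
    while remaining:                                           # assumes an acyclic network
        for act in sorted(remaining):
            if all(p in schedule for p in predecessors[act]):  # all predecessors scheduled
                start = max((schedule[p][1] for p in predecessors[act]), default=0)
                schedule[act] = (start, start + durations[act])
                remaining.remove(act)
                break
    return schedule

baseline = earliest_start_schedule(durations, predecessors)
planned_duration = max(finish for _, finish in baseline.values())
print(baseline)           # {'A': (0, 3), 'B': (3, 8), 'C': (3, 5), 'D': (8, 12)}
print(planned_duration)   # 12
```

The resulting start and finish times act as the point of reference from which deviations are measured in Step 3; adding resource constraints or alternative objectives would require the more advanced procedures cited below.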

The early research endeavours on baseline scheduling stem from the late 1950s, resulting in the two well-known scheduling techniques known as the critical path method (CPM) and the program evaluation and review technique (PERT) [5–7]. These methods make use of activity networks with precedence relations between the activities and take the minimisation of the total project time as the primary project objective. Due to the limited computer power at that time, the incorporation of resource constraints was largely ignored. However, these methods are still widely recognised as important project management tools and techniques and are often used as basic tools in more advanced baseline scheduling methods. Since the development of the PERT/CPM methods, a substantial amount of research has been carried out covering various areas of project baseline scheduling. The most important extensions of these basic scheduling methods are the incorporation of resource constraints, the extension to other scheduling objectives, and the development of new and more powerful solution methods to construct these baseline schedules, as summarised along the following lines.

(i) Adding Resources. Resource-constrained project scheduling is a widely discussed project management topic which has roots in and relevance for both academic and practically oriented environments. Due to its inherent problem complexity, it has been the subject of numerous research projects leading to a wide and diverse set of procedures and algorithms to construct resource feasible project schedules. Thanks to its practical relevance, many of the research results have found their way into practical project management applications. Somewhat more than a decade ago, this overwhelming amount of extensions inspired authors to bring structure to the chaos by writing overview papers [8–12] and summary books [2, 13] and by developing two different classification schemes [12, 14] on resource-constrained project scheduling.

(ii) Changing Objectives. The PERT/CPM methods mainly focused on constructing baseline schedules aiming at minimising the total lead-time of the project. However, many other scheduling objectives can be taken into account, and the choice of an objective to optimise can vary between projects, sectors, countries, and so forth. Some of these scheduling objectives take the cost of resources into account.

The so-called resource availability cost project (RACP) aims at minimising the total cost of the resource availability within a predefined project deadline, and references can be found in Demeulemeester [15], Drexl and Kimms [16], Hsu and Kim [17], Shadrokh and Kianfar [18], Yamashita et al. [19], Gather et al. [20], and Shahsavar et al. [21]. The resource levelling project (RLP) aims at the construction of a precedence and resource feasible schedule within a predefined deadline with a resource use that is as level as possible within the project horizon. Various procedures have been described in papers written by Bandelloni et al. [22], Gather et al. [20], Neumann and Zimmermann [23, 24], Coughlan et al. [25], Shahsavar et al. [21], and Gather et al. [26]. The resource-constrained project with work continuity constraints (RCP-WC) takes the so-called work continuity constraints [27] into account during the construction of a project schedule. The objective is to minimise the total idle time of bottleneck resources used in the project, and it has been used by Vanhoucke [28, 29]. The resource renting problem (RRP) aims at minimising the total resource cost, consisting of time-dependent and time-independent costs. Time-dependent costs are incurred for each time unit a renewable resource is in the resource set, while time-independent costs are incurred every time a resource is added to the existing resource set. References to solution procedures for this problem can be found in Nübel [30], Ballestín [31, 32], and Vandenheede and Vanhoucke [33].

Other scheduling objectives take the cost of activities into account to determine the optimal baseline schedule. The resource-constrained project with discounted cash flows (RCP-DCF) optimises the timing of cash flows in projects by maximising the net present value. The basic idea boils down to shifting activities with a negative cash flow further in time, while positive cash flow activities should be scheduled as soon as possible, respecting the precedence relations and limited resource availabilities. Algorithms have been developed by Smith-Daniels and Aquilano [34], Elmaghraby and Herroelen [35], Yang et al. [36], Sepil [37], Yang et al. [38], Baroum and Patterson [39], Icmeli and Erengüç [40], Pinder and Marucheck [41], Shtub and Etgar [42], Özdamar et al. [43], Etgar et al. [44], Etgar and Shtub [45], Goto et al. [46], Neumann and Zimmermann [24], Kimms [47], Schwindt and Zimmermann [48], Vanhoucke et al. [49, 50], Selle and Zimmermann [51], Vanhoucke et al. [52], and Vanhoucke [53, 54]. An overview is given by Mika et al. [55]. Vanhoucke and Demeulemeester [56] have illustrated the use and relevance of net present value optimisation in project scheduling for a water company in Flanders, Belgium. When activities have a preferred time slot to start and/or end and penalties are incurred when these activities start/end earlier or later than this preferred time slot, the problem is known as the resource-constrained project with weighted earliness/tardiness (RCP-WET) problem.
This baseline scheduling problem is inspired by the just-in-time philosophy from production environments and can be used in a wide variety of practical settings. Algorithmic procedures have been developed by Schwindt [57] and Vanhoucke et al. [58]. The resource-constrained project with quality-dependent time slots (RCP-QTS) is an extension of this RCP-WET problem. In this scheduling objective, multiple time slots are defined rather than a single preferred start time, and earliness/tardiness penalties must be paid when activities are scheduled outside one of these time slots. To the best of our knowledge, the problem has only been studied by Vanhoucke [59].

(iii) New Solution Methods. The vast majority of solution methods to construct resource-constrained baseline schedules can be classified into two categories. Exact procedures aim at finding the best possible solution for the scheduling problem type and are therefore often restricted to small projects under strict assumptions. This class of optimisation procedures is widely available in the literature but often cannot be used in real settings due to the high computation times needed to solve the problems. Heuristic procedures aim at finding good, but not necessarily optimal, schedules for more realistic projects (i.e., under different assumptions and for larger sizes) in a reasonable (computational) time. Although these procedures do not guarantee an optimal solution for the project, they can be easily embedded in any scheduling software tool due to their simplicity and their applicability to a broad range of different projects. An extensive discussion of the different algorithms is not within the scope of this paper. For a review of exact problem formulations, the reader is referred to Demeulemeester and Herroelen [13]. Heuristic procedures include single pass and multipass algorithms as well as metaheuristics and their extensions, and the number of published papers has exploded over the last years. An experimental investigation of heuristic search methods to construct a project schedule with resources can be found in Kolisch and Hartmann [60].

Today, project baseline scheduling research continues to grow in the variety of its theoretical models, in its magnitude, and in its applications. Despite this ever-growing amount of research on project scheduling, it has been shown in the literature that there is a wide gap between the project management discipline and the research on project management, as illustrated by Delisle and Olson [61], among many others. However, research efforts of recent years show a shift towards more realistic extensions, trying to add real needs to the state-of-the-art algorithms and procedures. Quite recently, a survey of variants and extensions of the resource-constrained project scheduling problem has been published [62]. This paper clearly illustrates that the list of extensions to the basic resource-constrained project scheduling problem is long and could possibly lead to continuous improvements in the realism of the state-of-the-art literature, bringing researchers closer to project management professionals. Moreover, while the focus of decades of research was mainly on the static development of algorithms to deal with the complex baseline scheduling problems, recent research activities have gradually started to focus on the development of dynamic scheduling tools that make use of the baseline schedule as a prediction of future project progress, in which monitoring and controlling the project performance relative to the baseline schedule should lead to warning signals when the project tends to move into the danger zone [63].

2.2. Activity Distributions

The construction of a project baseline schedule discussed in the previous section relies on activity time and cost estimates as well as on estimates of time lags for precedence relations and of the use of resources assigned to these activities. However, the constructed baseline schedule assumes that these deterministic estimates are known with certainty. Reality, however, is flavoured with uncertainty, which renders the PERT/CPM methods and their resource-constrained extensions often inapplicable to many real life projects. Consequently, despite its relevance in practice, the PERT/CPM approach often leads to underestimating the total project duration and costs (see e.g., Klingel [65], Schonberger [66], Gutierrez and Kouvelis [67], and many others), which obviously results in time and cost overruns. This occurs for the following reasons.
(i) The activity durations in the critical path method are single point estimates that do not adequately address the uncertainty inherent in activities. The PERT method extends this to a three point estimate, but still relies on a strict predefined way of analysing the critical path.
(ii) Estimates about time and cost are predictions for the future, and human beings often tend to be optimistic about them or, on the contrary, often add some safety reserve to protect themselves against unexpected events.
(iii) The topological structure of a network often implies extra risk at points where parallel activities merge into a single successor activity.

Uncertainty in the activity time and cost estimates or in the presence of project activities, uncertainty in the time lags of precedence relations or in the network structure, and even uncertainty in the allocation and costs of resources assigned to the activities can be easily modelled by defining distributions on the unknown parameters. These stochastic values must be generated from predefined distributions that ideally reflect the real uncertainty in the estimates. The use of distributions on activity durations has been investigated in project management research since the early developments of PERT/CPM. From the very beginning, project scheduling models have defined uncertainty in the activity durations by beta distributions. This is mainly due to the fact that the PERT technique initially used these distributions [68]. Extensions to generalised beta distributions are also recommended and used in the literature (see e.g., AbouRizk et al. [69]). However, since the parameters of these generalised beta distributions are not always easily understood or estimated, variation in activity durations is often simulated using the much simpler triangular distribution [70], where practitioners often base an initial input model on subjective estimates for the minimum value, the most likely value, and the maximum value of the distribution of the activity duration. Although it has been mentioned in the literature that the triangular distribution can be used as a proxy for the beta distribution in risk analysis (see e.g., Johnson [71]), its arbitrary use in case no empirical data are available should be treated with care (see e.g., Kuhl et al. [72]). In the simulation studies of [64, 73], the generalised beta distribution has been used to model activity duration variation, but its parameters have been approximated using the approximation rules of Kuhl et al. [72]. Other authors have used other distributions or approximations, resulting in a variety of ways to model activity duration variation. This is also mentioned in a quite recent paper written by Trietsch et al. [74], where the authors argue that the choice of a probability distribution seems to be driven by convenience rather than by empirical evidence. Previous research studies have revealed that the choice of distributions to model empirical data should reflect the properties of the data. As an example, AbouRizk et al. [69] defend the importance of appropriate input models and state that their inappropriate use is suspect and should be dealt with carefully. To that purpose, Trietsch et al. [74] advocate the use of lognormal functions for modelling activity times, based on theoretical arguments and empirical evidence. An overview of stochastic modelling in project scheduling and the proper use of activity time distributions would lead us too far from the dynamic scheduling and project control simulation topic of this paper. Therefore, the reader is referred to the recent paper of Trietsch et al. [74] as an ideal starting point on the use of activity time distributions in the stochastic project scheduling literature.
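As a concrete illustration of this step, the sketch below draws activity durations from a triangular distribution parameterised by subjective minimum, most likely, and maximum estimates, the simple input model mentioned above. The three-point estimates are invented for illustration; a (generalised) beta or lognormal input model could be substituted where empirical data or the arguments of Trietsch et al. [74] call for it.

```python
import random

# Subjective three-point estimates per activity (illustrative assumptions):
# (minimum, most likely, maximum) duration.
three_point = {"A": (2, 3, 6), "B": (4, 5, 9), "C": (1, 2, 4), "D": (3, 4, 8)}

def sample_durations(three_point, rng=random):
    """Draw one duration per activity from a triangular distribution."""
    return {act: rng.triangular(low, high, mode)
            for act, (low, mode, high) in three_point.items()}

random.seed(42)                        # reproducible runs
print(sample_durations(three_point))   # one random realisation of the activity durations
```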

In the remainder of this paper, activity distributions will only be used to model variation in the activity time estimates, and hence, no variation is modelled on the cost estimates or on the estimates of precedence relation time lags or resource assignments. This also means that all experiments and corresponding results discussed in the following sections only hold for controlling the time performance of projects, and the results can therefore not be generalised to cost control.

2.3. Run Simulation and Measure

In the third step, the project is subjected to Monte Carlo simulations to imitate fictitious project progress. The literature on using Monte Carlo simulations to generate activity duration uncertainty in a project network is rich and widespread and is praised as well as criticised throughout various research papers. In these simulation models, activity duration variation is generated using often subjective probability distributions without precise accuracy in practical applications (see previous section). Moreover, the inability of the simulation runs to incorporate the management focus on a corrective action decision making process to bring late running projects back on track has undermined the credibility of these techniques. Despite the criticism, practitioners as well as academics have used project network models within a general simulation framework to enable the generation of activity duration and cost uncertainties. For a discussion on the (dis)advantages of project network simulation, the reader is referred to Williams [75].

Despite the shortcomings and criticism of using Monte Carlo simulations in project management, it is a powerful and easy-to-use tool to analyse the behaviour of projects in progress and to measure the impact of changes in the initial estimates on the project objectives. Indeed, during each run of the simulation, a value for the activity duration is generated from the predefined distribution, leading to differences between the planned durations and the simulated values. These differences between the baseline schedule key metrics and their corresponding simulated values must be measured during each simulation step. Thanks to today's enormous computer power and memory, many deviations can be measured and saved during each simulation run, such as differences in the activity criticality, delays in the total project duration, and variability in the control performance metrics and corresponding forecasts. The specific choice of what type of measurement points will be saved during each simulation run depends on the specific study. In the three example simulation studies of Section 3, it will be shown that the measurement points saved during each simulation run differ according to the scope of each simulation study. Afterwards, when the simulation runs have finished, these measurement points are analysed, and some output metrics are calculated, as briefly discussed in Step 4.
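Combining the two previous sketches, a bare-bones version of the run-and-measure step could look as follows: each run samples new activity durations, reschedules the project with the same forward pass, and saves the deviation of the simulated project duration from the baseline duration. This is only a minimal sketch of the measurement idea; the actual studies below record a much richer set of measurement points (activity criticality, performance indices, forecasts, and so on).

```python
# Bare-bones Monte Carlo step: sample durations, reschedule, and save the deviation
# of each simulated project duration from the baseline duration. It reuses
# earliest_start_schedule(), sample_durations(), durations, predecessors, and
# three_point from the two earlier sketches (and the random module imported there).

def run_simulation(durations, predecessors, three_point, n_runs=1000, seed=1):
    random.seed(seed)
    baseline = earliest_start_schedule(durations, predecessors)
    planned_duration = max(finish for _, finish in baseline.values())
    delays = []                                        # measurement points saved per run
    for _ in range(n_runs):
        simulated = earliest_start_schedule(sample_durations(three_point), predecessors)
        real_duration = max(finish for _, finish in simulated.values())
        delays.append(real_duration - planned_duration)
    return delays

delays = run_simulation(durations, predecessors, three_point)
print(sum(delays) / len(delays))    # average deviation from the baseline duration
print(min(delays), max(delays))     # extreme deviations observed over all runs
```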

2.4. Report Output Metrics

The huge amount of data that has been saved during the simulation runs will now be analysed and summarised in key output metrics. These key output metrics differ from study to study and depend on the definition, scope, and target of the simulation study. In the current paper, the simulations are used to perform a dynamic scheduling and integrated project control study. It will be shown in the next section that the output metrics depend on the scope of the simulation study and the intended outcome of the research. An important aspect of the output metrics is that they need interpretation and understanding, such that they lead to conclusions that add insight and enhance the project control approach.

3. Simulation Studies

In this section, three illustrative project control simulation studies will be briefly presented, and references to interesting publications will be given for more details. For each simulation study, the measurement points and output metrics will be discussed in line with the scope of the study. In Section 3.2, a schedule risk analysis study will be presented to validate the power and reliability of risk metrics that measure the sensitivity of the activity durations. Section 3.3 gives an overview of an accuracy simulation study using earned value management and earned schedule predictors by comparing three methods from the literature. Finally, in Section 3.4, an action oriented project control study is discussed in which two alternative project control methods are compared and benchmarked. All simulation studies are carried out on a large set of fictitious projects generated under a controlled design. This generation process as well as the metrics to control the structure and design of the data is discussed in Section 3.1.

3.1. Test Data

In this section, the generation process to construct a set of project networks that differ from each other in terms of their topological structure is described in detail. Rather than drawing conclusions from a (limited) set of real life projects, the aim is to generate a large set of project networks that spans the full range of complexity [76]. This guarantees a very large and diverse set of generated networks that might occur in practice, such that the results of the simulation studies can be generalised. The generation process relies on the project network generator developed by Vanhoucke et al. [77] to generate activity-on-the-node project networks where the set of nodes represents network activities and the set of arcs represents the technological precedence relations between the activities. These authors have proposed a network generator that allows generating networks with a controlled topological structure. They have proven that their generator is able to generate a set of very diverse networks that differ substantially from each other from a topological structure point of view. Moreover, it has been shown in the literature that the structure of a network heavily influences the constructed schedule [78], the risk of delays [79], the criticality of a network [80], and the computational effort an algorithm needs to schedule a project [76]. In the simulation experiments, the design and structure of the generated networks are varied and controlled, resulting in 4,100 diverse networks with 30 activities. For more information about the specific topological structures and the generation process, the reader is referred to Vanhoucke et al. [77]. The constructed data set can be downloaded from http://www.or-as.be/measuringtime.

Various research papers dealing with network generators for project scheduling problems have been published throughout the academic literature. Demeulemeester et al. [81] have developed a random generator for activity-on-the-arc (AoA) networks. These networks are so-called strongly random since they can be generated at random from the space of all feasible networks with a specified number of nodes and arcs. Besides the number of nodes and the number of arcs, no other characteristics can be specified for describing the network topology. Kolisch et al. [82] describe ProGen, a network generator for activity-on-the-node (AoN) networks which takes into account network topology as well as resource-related characteristics. Schwindt [83] extended ProGen to ProGen/Max, which can handle three different types of resource-constrained project scheduling problems with minimal and maximal time lags. Agrawal et al. [84] recognise the importance of the complexity index as a measure of network complexity and have developed an activity-on-the-arc network generator DAGEN for which this complexity measure can be set in advance. Tavares [85] has presented a new generator RiskNet based on the concept of the progressive level and using six morphological indicators (see later in this section). Drexl et al. [86] presented a project network generator ProGen/πx based on the project generator ProGen, incorporating numerous extensions of the classical resource-constrained project scheduling problem. Demeulemeester et al. [87] have developed an activity-on-the-node network generator RanGen which is able to generate a large number of networks with a given order strength (discussed later). Due to an efficient recursive search algorithm, RanGen is able to generate project networks with exact predefined values for different topological structure measures. The network generator also takes the complexity index into account. Akkan et al. [88] have presented a constraint logic programming approach for the generation of acyclic directed graphs. Finally, Vanhoucke et al. [77] have adapted RanGen into an alternative RanGen2 network generator, which is based on the RiskNet generator of Tavares [85] and which will be used for the generation of the project networks of the studies that have led to the writing of this paper. None of the networks generated by the previously mentioned network generators can be called strongly random because they do not guarantee that the topology is a random selection from the space of all possible networks which satisfy the specified input parameters.

Besides the generation of project networks, numerous researchers have paid attention to the topological structure of a project network. The topological structure of a network can be calculated in various ways. Probably the best known measure for the topological structure of activity-on-the-arc networks is the coefficient of network complexity (CNC), defined by Pascoe [89] as the number of arcs over the number of nodes and redefined by Davies [90] and Kaimann [91, 92]. The measure has been adapted for activity-on-the-node problems by Davis [93] as the number of direct arcs over the number of activities (nodes) and has been used in the network generator ProGen [82]. Since the measure relies totally on the count of the activities and the direct arcs of the network, and as it is easy to construct networks with an equal CNC value but a different degree of difficulty, Elmaghraby and Herroelen [76] questioned the usefulness of the suggested measure. De Reyck and Herroelen [94] and Herroelen and De Reyck [95] conclude that the correlation of the CNC with the complexity index is responsible for a number of misinterpretations with respect to the explanatory power of the CNC. Indeed, Kolisch et al. [82] and Alvarez-Valdes and Tamarit [96] had revealed that resource-constrained project scheduling networks become easier with increasing values of the CNC, without considering the underlying effect of the complexity index. In conclusion, the CNC, by itself, fails to discriminate between easy and hard project networks and can therefore not serve as a good measure for describing the impact of the network topology on the hardness of a project scheduling problem.

Another well-known measure of the topological structure of an AoN network is the order strength, OS [97], defined as the number of precedence relations, including the transitive ones (when precedence relations exist between activities i and j and between activities j and k, there is also an implicit transitive relation between activities i and k) but not including the arcs connecting the dummy start or end activity, divided by the theoretical maximum number of precedence relations n(n − 1)/2, where n denotes the number of nondummy activities in the network. It is sometimes referred to as the density [98] or the restrictiveness [99] and is equal to 1 minus the flexibility ratio [100]. Herroelen and De Reyck [95] conclude that the order strength OS, the density, the restrictiveness, and the flexibility ratio constitute one and the same complexity measure. Schwindt [83] uses the order strength in the problem generator ProGen/Max and argues that this measure plays an important role in predicting the difficulty of different resource-constrained project scheduling problems. De Reyck [101] verified and confirmed the conjecture that the OS outperforms the complexity index as a measure of network complexity for the resource-constrained project scheduling problem.
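As a small numerical illustration of this definition, the sketch below computes the order strength of the four-activity network used in the earlier sketches by counting all precedence relations in the transitive closure and dividing by n(n − 1)/2. The network is an illustrative assumption and has nothing to do with the generated test instances.

```python
# Order strength OS = |precedence relations in the transitive closure| / (n(n-1)/2),
# computed for the illustrative four-activity network used in the earlier sketches.

def transitive_closure(predecessors):
    """Return all (predecessor, successor) pairs, transitive relations included."""
    def ancestors(act):
        result = set()
        for p in predecessors[act]:
            result.add(p)
            result |= ancestors(p)
        return result
    return {(p, act) for act in predecessors for p in ancestors(act)}

def order_strength(predecessors):
    n = len(predecessors)
    return len(transitive_closure(predecessors)) / (n * (n - 1) / 2)

predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(order_strength(predecessors))   # 5 relations out of 6 possible pairs = 0.833...
```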

The complexity index was originally defined by Bein et al. [102] for two-terminal acyclic activity-on-the-arc networks as the reduction complexity, that is, the minimum number of node reductions which—along with series and parallel reductions—allow reducing a two-terminal acyclic network to a single edge. As a consequence, the complexity index measures the closeness of a network to a series-parallel directed graph. Their approach for computing the reduction complexity consists of two steps. First, they construct the so-called complexity graph by means of a dominator and a reverse-dominator tree. Second, they determine the minimal node cover through the use of the maximum flow procedure by Ford and Fulkerson [103]. De Reyck and Herroelen [94] adopted the reduction complexity as the definition of the complexity index of an activity network and have proven the complexity index to outperform other popular measures of performance, such as the CNC. Moreover, they also show that the OS, in turn, outperforms the complexity index. These studies motivated the construction of an AoN problem generator for networks where both the order strength OS and the complexity index can be specified in advance, which has led to the development of the RanGen and RanGen2 generators (see earlier).

The topological structure of an activity-on-the-node network used in the three simulation studies is calculated based on four indicators initially proposed by Tavares et al. [79, 104] and further developed by Vanhoucke et al. [77]. These indicators serve as classifiers of project networks by controlling the design and structure of each individual project network. All indicators have been rescaled and lie between 0 and 1, inclusive, denoting the two extreme structures. The logic behind each indicator is straightforward and relies on general topological definitions from the project scheduling literature. Their intuitive meaning is briefly discussed along the following lines (a small computational sketch of the first indicator follows this list).
(i) Serial/parallel indicator (SP) measures how closely the project network lies to a 100% parallel (SP = 0) or 100% serial (SP = 1) network. This indicator can be considered as a measure for the amount of critical and noncritical activities in a network and is based on the indicator proposed by Tavares et al. [79].
(ii) Activity distribution (AD) measures the distribution of the activities along the network, from a uniform distribution across the project network (AD = 0) to a highly skewed distribution (e.g., a lot of activities in the beginning, followed by only a few activities near the end) (AD = 1).
(iii) Length of arcs (LA) measures the length of each precedence relation between two activities as the distance between two activities in the project network. A project network can have many precedence relations between two activities lying far from each other (LA = 0), and hence most activities can be shifted further in the network. When all precedence relations have a length of one (LA = 1), all project activities have only immediate successors with little freedom to shift.
(iv) Topological float (TF) measures the degrees of freedom for each activity as the difference between the progressive and regressive level [105] for each activity in the project network. TF = 0 when the network structure is 100% dense and no activities can be shifted within its structure. A network with TF = 1 consists of one serial chain of activities without topological float (this chain defines the SP value), while the remaining activities have a maximal float value.
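The sketch below illustrates the SP indicator announced in item (i), using one common definition from this literature: SP = (m − 1)/(n − 1), with m the maximal progressive level of the network and n the number of activities (SP = 1 for a single activity). The definition and the example network are presented as assumptions for illustration; the exact formulas of all four indicators can be found in Vanhoucke et al. [77].

```python
# One common definition of the serial/parallel indicator: SP = (m - 1) / (n - 1),
# with m the maximal progressive level and n the number of activities (SP = 1 if n = 1).
# Definition and network are given here for illustration only.

def progressive_level(act, predecessors, cache):
    """Level 1 for start activities, otherwise 1 + the maximum level of the predecessors."""
    if act not in cache:
        preds = predecessors[act]
        cache[act] = 1 if not preds else 1 + max(
            progressive_level(p, predecessors, cache) for p in preds)
    return cache[act]

def serial_parallel_indicator(predecessors):
    n = len(predecessors)
    if n == 1:
        return 1.0
    cache = {}
    m = max(progressive_level(act, predecessors, cache) for act in predecessors)
    return (m - 1) / (n - 1)

predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(serial_parallel_indicator(predecessors))   # m = 3 levels, n = 4, so SP = 2/3
```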

3.2. Study 1: Schedule Risk Analysis

Schedule risk analysis (SRA) [106] is a technique that relies on the project control simulation algorithm presented in Section 2. It generates sensitivity metrics for project activities that express the relation between variation in the activity duration estimates and variation in the total project duration. The literature on sensitivity metrics for measuring the impact of variability in the project activity estimates is wide and diverse. Typically, many papers and handbooks mention the idea of using Monte Carlo simulations as the most accessible technique to estimate a project’s completion time distribution. These research papers often present simple metrics to measure a project’s sensitivity under various settings. Williams [70] reviews three important sensitivity measures to assess the criticality and/or sensitivity of project activities. The author shows illustrative examples for three sensitivity measures and mentions weaknesses for each metric. For each sensitivity metric, anomalies can occur which might lead to counter-intuitive results. For these reasons, numerous extensions have been presented in the literature that (partly) answer these shortcomings and/or anomalies. Tavares et al. [80] present a surrogate indicator of criticality by using a regression model in order to offer a better alternative to the poor performance of the criticality index in predicting the impact of an activity delay on the total project duration. Kuchta [107] presents an alternative criticality index based on network information. However, no computational experiments have been performed to show the improvement of the new measure. In Elmaghraby [108], a short overview is given of the advantages and disadvantages of the three sensitivity measures discussed in Williams [70]. The author conjectures that a relative importance of project activities should be given by considering a combined version of these three sensitivity measures and reviews the more advanced studies that give partial answers to the mentioned shortcomings. More precisely, the paper reviews the research efforts related to the sensitivity of the mean and variance of a project’s total duration due to changes in the mean and variance of individual activities. Cho and Yum [109] propose an uncertainty importance measure to quantify the effect of the variability in an activity’s duration on the variability of the overall project duration. Elmaghraby et al. [110] investigate the impact of changing the mean duration of an activity on the variability of the project duration. Finally, Gutierrez and Paul [111] present an analytical treatment of the effect of activity variance on the expected project duration. Motivated by the heavy computational burden of simulation techniques, various researchers have published analytical methods and/or approximation methods as a worthy alternative. An overview can be found in the study of Yao and Chu [112] and will not be discussed in the current research paper. Although not very recently published, another interesting reference related to this topic is the classified bibliography of research related to project risk management written by Williams [113]. A detailed study of all sensitivity extensions is outside the scope of this paper, and the reader is referred to the different sources mentioned above.

In this section, four sensitivity metrics for activity duration sensitivity will be used in the project control simulation study originally presented by Vanhoucke [73] and further discussed in Vanhoucke [63] and Vanhoucke [2]. Three of the four activity duration sensitivity measures have been presented in the criticality study in stochastic networks written by Williams [70], while a fourth sensitivity measure is based on the sensitivity issues published in PMBOK [114]. The four sensitivity metrics used in the simulation are described along the following lines (a computational sketch follows this list).
(i) Criticality index (CI) measures the probability that an activity lies on the critical path.
(ii) Significance index (SI) measures the relative importance of an activity taking the expected activity and project duration into account as well as the activity slack.
(iii) Schedule sensitivity index (SSI) measures the relative importance of an activity taking the CI as well as the standard deviations of the activity and project durations into account.
(iv) Cruciality index (CRI) measures the correlation between the activity duration and the total project duration in three different ways:
(a) CRI(r): Pearson’s product-moment correlation coefficient;
(b) CRI(ρ): Spearman’s rank correlation coefficient;
(c) CRI(τ): Kendall’s tau rank correlation coefficient.
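As announced above, the sketch below shows how three of these metrics can be obtained from the per-run measurements of the Monte Carlo simulation: the CI as the fraction of runs in which the activity is critical, the SSI following the common definition CI × (standard deviation of the activity duration / standard deviation of the project duration), and CRI(r) taken here as the absolute value of the Pearson correlation. The input arrays are invented per-run measurements for a single activity; they are assumptions for illustration only.

```python
import statistics

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def sensitivity_metrics(activity_durations, project_durations, is_critical):
    """Compute CI, SSI, and CRI(r) for one activity from per-run measurements."""
    ci = sum(is_critical) / len(is_critical)                   # criticality index
    ssi = ci * statistics.pstdev(activity_durations) / statistics.pstdev(project_durations)
    cri_r = abs(pearson(activity_durations, project_durations))
    return ci, ssi, cri_r

# Invented per-run measurements for a single activity:
act_durations = [3.1, 4.8, 3.5, 5.9, 4.2, 3.0, 5.1, 4.4]
prj_durations = [11.8, 13.9, 12.1, 15.2, 13.0, 11.7, 14.3, 13.2]
critical_runs = [True, True, False, True, True, False, True, True]
print(sensitivity_metrics(act_durations, prj_durations, critical_runs))
```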

The aim of the study is to compare the four sensitivity metrics in a project control setting and to test their ability to distinguish between highly and lowly sensitive activities such that they can be used efficiently during project control. Therefore, the scope of the study and the used measurement points and resulting output metrics can be summarised as follows.
(i) Scope of the study is to compare and validate four well-known sensitivity metrics for activity duration variations.
(ii) Measurement points are activity criticality, activity slack, and variability in and correlations between the activity and project durations.
(iii) Output metrics are values for the four sensitivity measures (6 values in total since three versions of CRI are used).

Figure 1 shows computational results of various experiments. The figure shows the six previously mentioned sensitivity metrics on the x-axis and displays their values between 0 and 1 on the y-axis. The size of the bubbles in the graphs is used to display the frequency of occurrence as the number of activities in the project network with such a value. The three graphs display results for projects with values of the SP indicator discussed in Section 3.1 equal to 0.25, 0.50, and 0.75.

The figure can be used to validate the discriminative power of the sensitivity metrics to make a distinction between highly sensitive activities (with a high expected impact) and the other, less important activities that require much less attention. Ideally, the number of highly sensitive activities should be low such that only a small part of the project activities requires attention while the others can be considered as safe. The criticality index and significance index do not report very good results on that aspect for projects with a medium (SP = 0.50) to high (SP = 0.75) number of serial activities, since many (SP = 0.50) or most (SP = 0.75) activities are considered to be highly sensitive. The other sensitivity measures, the SSI and the three versions of CRI, perform much better since they show higher upward tails with a low number of activities.

The CRI metric has a more or less equal distribution of the number of observations between the lowest and highest values, certainly for the SP = 0.50 and SP = 0.75 projects. The SSI clearly shows that a lot of activities are considered as less sensitive for SP = 0.25 and SP = 0.50, while only a few activities have much higher (i.e., more sensitive) values. Consequently, the SSI and CRI metrics have a higher discriminative power than the SI and CI metrics. Similar findings have been reported in Vanhoucke [73].

It should be noted that the figure does not evaluate the sensitivity metrics on their ability to measure the real sensitivity of the project activities, that is, to forecast the real impact of activity duration changes on the project duration. Moreover, their applicability in a project control setting is also not incorporated in this figure. This topic is, however, discussed in the project control experiments of Section 3.4.

3.3. Study 2: Forecasting Accuracy

In this section, a simulation study to measure the accuracy of two earned value management (EVM) methods and one earned schedule (ES) method to forecast the final duration of a project in progress is discussed, based on the work presented in Vanhoucke and Vandevoorde [115]. This study is a follow-up simulation study of the comparison made by Vandevoorde and Vanhoucke [116] where three forecasting methods have been discussed and validated on a small sample of empirical project data. Results of this simulation study have also been reported in follow-up papers published by Vanhoucke and Vandevoorde [117], Vanhoucke [118], Vanhoucke and Vandevoorde [119, 120], and Vanhoucke [121] and in the book by Vanhoucke [63] and have been validated using empirical project data from 8 Belgian companies from various sectors [122].

Earned value management is a methodology used to measure and communicate the real physical progress of a project and to integrate the three critical elements of project management (scope, time, and cost management). It takes into account the work completed, the time taken, and the costs incurred to complete the project, and it helps to evaluate and control project risks by measuring project progress in monetary terms. The basic principles and the use in practice have been comprehensively described in many sources [123]. EVM relies on the schedule performance index (SPI) to measure the performance of the project duration during progress. Although EVM has been set up to follow up both time and cost, the majority of the research has been focused on the cost aspect (see e.g., the paper written by Fleming and Koppelman [124] who discuss EVM from a price tag point of view). In 2003, an alternative method, known as earned schedule (ES), was proposed by Lipke [125], which relies on principles similar to those of EVM but measures the time performance of projects in progress by an alternative schedule performance index SPI(t) that better reflects the real time performance of projects in progress.

The three forecasting methods to forecast the final project duration are known as the planned value method (PVM) [126], the earned duration method (EDM) [127, 128], and the earned schedule method (ESM) [125]. A prediction of the final project duration made during the progress of the project using one of these three methods is known as the estimated duration at completion, abbreviated as EAC(t). Each of the three methods can be used in three alternative ways, expressing the assumption about future expected project performance, resulting in 3 * 3 = 9 EAC(t) methods.
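To illustrate one of these predictors, the sketch below computes the earned schedule ES from a cumulative planned value curve, the corresponding SPI(t), and the ESM forecast EAC(t) = AT + (PD − ES)/PF for the three usual performance factor assumptions (PF = 1, PF = SPI(t), and PF = SPI(t) · CPI), following the earned schedule literature cited above. The planned value curve and the progress figures are invented for illustration.

```python
# Earned schedule based duration forecasting (ESM). pv[k] is the cumulative planned
# value at the end of period k; all figures below are invented for illustration.

def earned_schedule(pv, ev):
    """ES = t + (EV - PV_t) / (PV_{t+1} - PV_t), with t the last period where PV_t <= EV."""
    t = max(k for k, value in enumerate(pv) if value <= ev)
    if t == len(pv) - 1:
        return float(t)
    return t + (ev - pv[t]) / (pv[t + 1] - pv[t])

def eac_t_esm(pv, ev, at, pd, cpi=1.0):
    """Return the three ESM forecasts EAC(t) = AT + (PD - ES) / PF."""
    es = earned_schedule(pv, ev)
    spi_t = es / at
    return {"PF = 1": at + (pd - es),
            "PF = SPI(t)": at + (pd - es) / spi_t,
            "PF = SPI(t)*CPI": at + (pd - es) / (spi_t * cpi)}

pv = [0, 10, 25, 45, 70, 90, 100]    # cumulative planned value per period
ev, at, pd, cpi = 38, 4, 6, 0.95     # earned value, actual time, planned duration, cost index
print(eac_t_esm(pv, ev, at, pd, cpi))
```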

Unique to this simulation study is the use of the activity distribution functions to simulate activity variation, as discussed in Step 2 of the project control simulation algorithm of Section 2. The simple and easy-to-use triangular distribution is used to simulate activity duration variation, but its parameters are set in such a way that nine predefined simulation scenarios could be tested. These 9 simulation scenarios are defined based on two parameters. The first is the variation in activity durations, which can be defined on the critical and/or noncritical activities. The second parameter is the controlled project performance, measured along the simulation runs by the schedule performance index SPI(t) at periodic time intervals for each simulation run. The use of these two parameters results in 9 simulation scenarios that can be classified as follows.

True Scenarios. Five of the nine scenarios report an average project duration performance (ahead of schedule, on time, or delayed) measured by the periodic SPI(t) and finally result in a real project duration that corresponds to the measured performance. These scenarios are called true scenarios since the measured performance metric SPI(t) reflects the true outcome of the project.

Misleading Scenarios. Two of the nine scenarios are somewhat misleading since they measure a project ahead of schedule or a project delay, while the project finishes exactly on time.

False Scenarios. Two of the nine scenarios are simply wrong since the performance measurement of SPI(t) reports the complete opposite of the final outcome. When a project is reported to be ahead of schedule, it finishes late, while a warning of project delays turns out to result in a project finishing earlier than expected.

The reason why different simulation settings are used for critical versus noncritical activities lies at the heart of EVM and is based on the comments made by Jacob [127]. This author argues that the use of EVM and ES metrics on the project level is dangerous and might lead to wrong conclusions. The reason is that variation in noncritical activities has no real effect on the project duration but is nevertheless measured by the SPI and SPI(t) metrics on the project level and hence might give a false warning signal to the project manager. Consequently, the author suggests using the SPI and SPI(t) metrics on the activity level to avoid these errors, and certainly not at higher levels of the work breakdown structure (WBS). This concern has also been raised by other authors and has led to a discussion summarised in papers such as Book [129, 130], Jacob [131], and Lipke [132].

Although it is recognised that, at higher WBS levels, effects (delays) of nonperforming activities can be neutralised by well performing activities (ahead of schedule), which might result in masking potential problems, the advice of these authors has not been followed in the simulation study of this section. Instead, in contradiction to the recommendations of Jacob [127], the SPI and SPI(t) indicators are nevertheless measured on the project level, realising that this might lead to wrong conclusions. Therefore, the aim of the study is to test what the impact of this error is on the forecasting accuracy when the performance measures are used at the highest WBS level (i.e., the project level). By splitting the scenarios between critical and noncritical activities, the simulation study can be used to test this known error and its impact on the forecasting accuracy. The reason why these recommendations are ignored is that it is believed that the only approach that can be taken by practitioners is indeed to measure performance on the project level. These measures are used as early warning signals to detect problems and/or opportunities in an easy and efficient way at high levels in the WBS, rather than as a simple replacement of the critical path based scheduling tools. This early warning signal, if analysed properly, defines the need to eventually drill down into lower WBS levels. In conjunction with the project schedule, it allows taking corrective actions on activities that are in trouble (especially those tasks that are on the critical path). A similar observation has been made by Lipke et al. [133], who also note that detailed schedule analysis is a burdensome activity and, if performed, often can have disrupting effects on the project team. EVM offers calculation methods yielding reliable results at higher WBS levels, which greatly simplify final duration and completion date forecasting.

The scope of the study and the used measurement points and resulting output metrics can be summarised as follows (a small sketch of the accuracy measures follows this list).
(i) Scope of the study is to compare and validate three EVM/ES techniques (PVM, EDM, and ESM) for forecasting the project duration.
(ii) Measurement points are periodic performance metrics (SPI and SPI(t)) and the resulting 9 forecasting methods (EAC(t)).
(iii) Output metrics are the mean absolute percentage error (MAPE) and the mean percentage error (MPE) to measure the accuracy of the three forecasting methods.
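A minimal version of these two accuracy measures, applied to a handful of invented simulation results, is sketched below; the sign convention for the MPE is taken here as (actual − forecast)/actual. In the actual study, the errors are averaged per forecasting method, per completion stage, and per network class.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error: lower values mean a more accurate forecast."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def mpe(actuals, forecasts):
    """Mean percentage error: its sign reveals systematic over- or underestimation."""
    return 100 * sum((a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

real_durations = [12.4, 14.1, 11.8, 15.6]    # simulated "real" project durations
forecasts = [12.0, 14.5, 11.5, 15.9]         # EAC(t) forecasts at some review period
print(mape(real_durations, forecasts), mpe(real_durations, forecasts))
```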

Table 1 presents partial results of the forecasting accuracy for the three methods (PVM, EDM, and ESM) along the completion stage of the project and for different project networks. The completion stage is measured as the percentage completed EV/BAC, with EV the earned value and BAC the budget at completion. Early, middle, and late stages are defined as 0%–30%, 30%–70%, and 70%–100% completed, respectively. The project network structure is shown by the serial/parallel degree of a project and is measured by the SP indicator discussed in Section 3.1. The column with label “P” represents networks with most activities in parallel, while the column with label “S” consists of project networks with mainly serial activities. The “S/P” column is a mix of both and contains both parallel and serial activities. The forecast accuracy is given in the body of the table. Since it is measured by the MAPE, lower numbers denote a higher forecast accuracy.

The table clearly shows that all three EVM/ES forecasting methods become more reliable as the number of serial activities increases. More serial projects have more critical activities, and hence, potential errors of project performance measurement on high WBS levels are unlikely to happen since each delay in individual (critical) activities has a real effect on the project duration. Moreover, the ESM outperforms the PVM and EDM along all stages of completion in predicting the duration of a project. The table also indicates that the accuracy of all EVM performance measures improves towards the final stages. However, the PVM shows a low accuracy at the final stages, due to the unreliable SPI trend. Indeed, it is known that the SPI goes to one, even for projects ending late, leading to biased results, which is not the case for the SPI(t) metric [63, 125].

3.4. Study 3: Project Control

The relevance of the two previous simulation studies lies in the ability of the two methods (SRA in Section 3.2 and EVM/ES in Section 3.3) to monitor and control projects and to generate warning signals for actions when the project runs out of control. In the third simulation study, the two previously mentioned methods are integrated into a dynamic project control system. The simulation is set up to test two alternative project control methods by using two types of dynamic information during project progress to improve corrective action decisions. Information on the sensitivity of individual project activities obtained through schedule risk analysis (SRA) as well as dynamic performance information obtained through earned value/schedule management (EVM/ES) will be dynamically used to steer the corrective action decision making process. The simulation study has been originally published by Vanhoucke [64] and further discussed in Vanhoucke [2, 3, 63]. Recently, a new study on integrating SRA with EVM/ES has been published by Elshaer [134].

The two alternative project control methods are considered from two extreme WBS level starting points. Although they represent a rather black-and-white view on project control, they can be considered as fundamentally different control approaches, both of which can easily be implemented in a less extreme way or can even be combined or mixed during project progress. Figure 2 graphically displays the two extreme control methods along the WBS level, which are known as the top-down project control approach and the bottom-up project control approach. Details are given along the following lines.

Bottom-Up Project Control. The sensitivity metrics used in the study discussed in Section 3.2 are crucial to the project manager since they provide information about the sensitivity of activity duration variation and its expected impact on the project duration. This information is crucial to steer a project manager’s attention towards a subset of the project activities that have a high expected effect on the overall project performance. These highly sensitive activities are subject to intensive control, while others require less or no attention during project execution. Since the activity information at the lowest level of the WBS is used to control the project and to take actions that should bring projects in danger back on track, this approach is called bottom-up project control.

Top-Down Project Control. Project control using the EVM/ES systems discussed in Section 3.3 offers the project manager a tool to perform a quick and easy sanity check at the highest level of the WBS, the project level. These systems provide early warning signals to detect problems and/or opportunities in an easy and efficient way that define the need to eventually drill down into lower WBS levels. In conjunction with the project schedule, this allows taking corrective actions on those activities that are in trouble (especially those tasks which lie on the critical path).

The scope of the study and the used measurement points and resulting output metrics can be summarised as follows.
(i) Scope of the study is the comparison between a top-down project control approach using EVM/ES and a bottom-up project control approach using SRA.
(ii) Measurement points include the number of control points as a proxy for the control effort and the result of corrective actions taken by the project manager as a proxy for the quality of the actions.
(iii) Output metrics are the efficiency of both project control approaches, defined as a comparison between the effort of controlling the project in progress and the results of the actions, as explained along the following lines.

Unique to this simulation study is the presence of corrective actions to bring projects in danger back on track. These simulated actions must be taken from the moment performance thresholds are exceeded. The specific threshold depends on the control approach used. For the bottom-up control approach, only highly sensitive activities are controlled, and hence, action thresholds are set on the values of the sensitivity metrics. As an example, from the moment the SSI is bigger than 70%, the activity is said to be highly sensitive, and it is expected that delays in this activity might have a significant impact on the total project duration. Therefore, it is better to carefully control this activity when it is in progress. Activities with a low SSI value, on the contrary, are considered to be safe and need no intensive control during progress (=lower effort). The top-down project control approach uses schedule performance information at regular points in time, given by the SPI and SPI(t). From the moment these values drop below a certain predefined threshold, say 70%, it is an indication that some of the underlying activities at the lowest WBS level might be in danger. Therefore, the project manager has to drill down (=increasing effort), trying to detect the problem and find out whether corrective actions are necessary to improve the current low performance.

Figure 3 displays a graphical representation of the simulation approach of the project control study. The dynamic simulation starts at the project start (time 0) and gradually advances at each review period until the project is finished. At each control point, the necessary information is calculated or simulated, and once thresholds are exceeded, a search for project problems is triggered and actions on the activity level might be performed.

The two alternative control methods show one important difference. In the top-down approach, displayed at the left of the picture, all EVM performance metrics are calculated at each time period, and only when thresholds are exceeded does a drill down to lower WBS levels follow in search of potential problems that might require action. In the bottom-up approach, displayed at the right of the picture, a subset of the activities in progress, determined by the thresholds, is subject to control and might require actions in case of problems. Consequently, the selection of the set of activities that require intensive control in search of potential problems and the corresponding actions to bring the project back on track differ between the two control methods, as follows (see also the sketch after this list).(i)Top-down. At every time period, all EVM performance metrics are calculated, and when thresholds are exceeded, all activities in progress are scanned in search of potential problems (and corresponding actions). Consequently, the search for project problems is triggered by thresholds on periodic EVM metrics and, once exceeded, is done on all activities in progress. (ii)Bottom-up. At every time period, all SRA sensitivity metrics are simulated, and all activities in progress are scanned for their values. Only a subset of these activities, those that exceed the thresholds, will be further analysed in search of problems (and corresponding actions). Consequently, the search for project problems is triggered by thresholds on activity sensitivity metrics and is performed only on the subset of those activities in progress with a value exceeding the threshold value.
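Using the same hypothetical names as before, the sketch below contrasts one review period of both control methods; the dictionary keys and the function signature are illustrative assumptions rather than the simulation model itself.

```python
def review_period(control_mode, in_progress, evm, ssi, thresholds):
    """One review period of the simulated control process (illustrative only).

    control_mode : "top_down" or "bottom_up"
    in_progress  : activities in progress at this review period
    evm          : project-level metrics, e.g. {"SPI": 0.85, "SPI(t)": 0.80}
    ssi          : simulated sensitivity per activity, e.g. {"A3": 0.92, ...}
    thresholds   : e.g. {"spi": 0.70, "ssi": 0.70}
    Returns the set of activities that will be inspected for problems this period.
    """
    if control_mode == "top_down":
        # Project-level trigger: once exceeded, all activities in progress are scanned.
        if evm["SPI"] < thresholds["spi"] or evm["SPI(t)"] < thresholds["spi"]:
            return set(in_progress)
        return set()
    # Bottom-up: activity-level trigger; only the sensitive subset is scanned every period.
    return {a for a in in_progress if ssi[a] > thresholds["ssi"]}
```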

Figure 4 shows an illustrative graph of this control efficiency for both control approaches. The x-axis displays the closeness of each project to a complete serial or parallel network, as measured by the SP indicator discussed in Section 3.1. The y-axis measures the control efficiency as follows (a small computational sketch is given after this list).(i)Effort. The effort is measured by the number of control points in the simulation study. This number is equal to the number of times the action thresholds are exceeded. Indeed, from the moment a threshold is exceeded, the project manager must spend time to find out whether there is a problem during progress. Hence, the number of control points is used as a proxy for the effort of control and depends on the value of the action thresholds. Obviously, the lower the effort, the higher the control efficiency, and hence the effort is set in the denominator of the control efficiency output metric. In Figure 3, the effort is measured by the number of activities that require intense control at each review period. For both approaches, this is equal to the number of times the “threshold exceeded” block gives a “yes” answer. (ii)Return. When corrective actions are taken, their impact should bring projects in danger back on track and should therefore contribute to the overall success of the project. Therefore, the return of the actions is measured as the difference between the project duration without actions and the project duration with actions. Obviously, the return of the actions can be considered as a proxy for the quality of the actions and should be set in the numerator of the project control efficiency output metric.
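Following the effort and return definitions above, the control efficiency output metric could be computed as in the sketch below; the function name is a hypothetical illustration.

```python
def control_efficiency(duration_without_actions, duration_with_actions, n_control_points):
    """Control efficiency = return of the corrective actions / effort spent on control.

    Return : time saved by the actions (duration without actions minus duration with actions).
    Effort : number of control points, i.e. how often an action threshold was exceeded.
    """
    time_saved = duration_without_actions - duration_with_actions
    if n_control_points == 0:
        return 0.0  # no control effort was spent, so no efficiency can be attributed
    return time_saved / n_control_points
```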

The graph clearly demonstrates that a top-down project-based control approach using the EVM/ES systems provides highly accurate results when the project network contains more serial activities. The bottom-up control approach using sensitivity information of activities obtained through a standard schedule risk analysis is particularly useful when projects contain a lot of parallel activities. This bottom-up approach requires subjective estimates of probability distributions to define the activity risk profiles, but it simplifies the control effort by focusing on those activities with a highly expected effect on the overall project objective.

4. Future Research

In this section, a short overview is given of ideas for improving current project control systems and/or developing novel techniques, and of their further integration into decision support systems in order to better control projects in progress. Most of the ideas presented in this section consist of work in progress funded by the concerted research actions (CRA) funding scheme. This funding has resulted in a six-year research project that started in 2012. This “more than a million euro” research project, in collaboration with international universities and companies, will certainly move the research in project management and dynamic scheduling towards a higher level.

4.1. Statistical Project Control

The project control approach of this paper is set up to indicate the direction of change in the preliminary planning variables, set by the baseline schedule, compared with the actual performance during project progress. When the performance of a project in progress deviates from the planned performance, the system gives a warning so that corrective actions can be taken.

In the literature, various systems have been developed to measure deviations between planned and actual performance in terms of time and cost and to trigger actions when thresholds are exceeded. Although the use of threshold values to trigger the corrective action process has been explained in the study of Section 3.4, nothing has been said about the probability that real project problems have occurred once these threshold values are exceeded. Indeed, little research has been done on the use and setting of these threshold values and on their accuracy in detecting real project problems in a timely manner. Therefore, it is believed that future research should point in this direction. The vast amount of data available during project progress should allow the project manager to use statistical techniques in order to improve the discriminative power between in-control and out-of-control project progress situations. The use of these so-called Statistical Project Control (SPC) systems should ideally lead to an improved ability to trigger actions when variation in a project’s progress exceeds certain predefined thresholds.

The use of statistical project control is not new in the literature and has been investigated by Lipke and Vaughn [135], Bauch and Chung [136], Wang et al. [137], Leu and Lin [138], and the National Research Council [139]. These papers mainly focus on the use of statistical project control as an alternative to the statistical process control used in manufacturing processes. Despite the fact that both approaches share the same abbreviation SPC, the statistical project control approach should be fundamentally different from statistical process control [140]. Therefore, it is believed that future research on SPC should go much further than the models and techniques presented in these papers. SPC should be a new approach to control projects based on the analysis of data that is generated before the start of the project (static) as well as during project progress (dynamic) [141]. This data analysis should allow the user to set automatic thresholds using multivariate statistics for EVM/ES systems and SRA systems in order to replace the often subjective action thresholds set by project managers based on wild guesses and experience. Fundamental research is therefore crucial to validate the novel statistical techniques, to investigate their merits and pitfalls, and to allow the development of project control decision support systems based on a sound methodology. Research on this relatively new project control topic is available in Colin and Vanhoucke [140, 142].
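As a purely illustrative sketch, loosely inspired by the idea of data-driven tolerance limits but not reproducing the method of [140, 142], project-level action thresholds could be derived from Monte Carlo simulations of acceptable progress instead of being fixed subjectively; the function name and data layout below are assumptions.

```python
import numpy as np


def statistical_thresholds(simulate_in_control_run, n_runs=1000, alpha=0.05):
    """Derive period-by-period control limits for a project-level metric such as SPI(t).

    simulate_in_control_run() is assumed to return one list of metric values,
    one value per review period (same length for every run), generated by a
    Monte Carlo simulation of *acceptable* project progress. The empirical
    alpha and 1-alpha quantiles per period then act as data-driven action
    thresholds, replacing a single subjective value such as 0.70.
    """
    runs = np.array([simulate_in_control_run() for _ in range(n_runs)])
    lower = np.quantile(runs, alpha, axis=0)        # lower control limit per review period
    upper = np.quantile(runs, 1.0 - alpha, axis=0)  # upper control limit per review period
    return lower, upper
```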

4.2. If Time Is Money, Accuracy Pays

The “if time is money, accuracy pays” [143] statement highlights the relevance and importance of accuracy in the simulation studies presented in this paper. Measuring and improving the accuracy of predictive methods to forecast the final duration of a project in progress using EVM/ES systems is crucial for project control in order to monitor the project time objectives and, since time is money, also the cost objectives. Recent research efforts in project duration forecasting have focused on improving the accuracy of forecasts by combining the existing EVM/ES forecasting techniques or even by borrowing principles from the traditional forecasting literature and adapting them to a project control setting. Although researchers have recommended combined forecasts for over half a century, their use in a project control environment is relatively new, and it is therefore believed that future research efforts should focus on forecasting improvement techniques. The use of composite forecasting methods or extensions to, for example, the Kalman filter [144] or Bayesian inference [145] are excellent examples of future research avenues for project duration forecasting.
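To make the ES-based forecasting concrete, the sketch below computes the earned schedule by linear interpolation on the planned value curve and the standard duration forecast EAC(t) = AD + (PD - ES)/PF, with PF equal to 1 or SPI(t); the variable names and curve layout are illustrative assumptions.

```python
def earned_schedule(ev, pv_curve):
    """Earned schedule ES: the time at which the planned value equals the current EV.

    pv_curve[t] is the cumulative planned value at the end of period t, with
    pv_curve[0] assumed to be 0 (the project start); linear interpolation is
    used within the period, as is customary in the earned schedule literature.
    """
    t = max(i for i, pv in enumerate(pv_curve) if pv <= ev)
    if t == len(pv_curve) - 1:          # EV has reached the final planned value
        return float(t)
    return t + (ev - pv_curve[t]) / (pv_curve[t + 1] - pv_curve[t])


def forecast_duration(actual_duration, planned_duration, es, performance_factor=1.0):
    """Duration forecast EAC(t) = AD + (PD - ES) / PF.

    PF = 1 assumes the remaining work follows the baseline rate; PF = SPI(t)
    assumes the current schedule performance continues.
    """
    return actual_duration + (planned_duration - es) / performance_factor
```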

However, the quality of forecasting metrics does not only depend on the average accuracy, measured by the sum of absolute or relative errors over all review periods, but also on the stability of the forecasts over these periods. Indeed, when project managers use the periodic forecasts to monitor and control the performance of projects in progress, it is very important to have a reliable value for each period such that actions can be taken based on a well-considered view of the forecasts over the last few periods. Stability is an important aspect in this control process since it avoids overreactions based on a single forecast value and instead puts the focus on a series of forecasts having similar (stability) and reliable (accuracy) values. Various methods for assessing the stability of forecasts have been discussed in the literature, and an overview falls outside the scope of this paper. It is, however, believed that these efforts can and will be used in a project control setting in future research. Research studies to determine which of the two aspects of forecasting quality, accuracy or stability, is most important would be relevant for both academics and professionals. Future research should focus on further improvements of forecasting accuracy and stability and on the possible trade-offs between these two quality dimensions. Stability studies in project management are not new. Studies on cost forecasting using EVM have been done by Christensen and Heise [146] and Christensen and Payne [147], among others. Time forecasting stability studies using ES are relatively new and have been done by Henderson and Zwikael [148].
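As one simple way to quantify both quality dimensions (other definitions exist in the forecasting literature), the sketch below measures accuracy as the mean absolute percentage error of the periodic forecasts and stability as the mean absolute change between consecutive forecasts; both helper functions are hypothetical.

```python
def accuracy_mape(forecasts, real_duration):
    """Accuracy: mean absolute percentage error of the periodic duration forecasts."""
    return sum(abs(f - real_duration) for f in forecasts) / (len(forecasts) * real_duration)


def stability(forecasts):
    """Stability proxy: mean absolute change between two consecutive forecasts
    (lower values mean more stable forecasts)."""
    changes = [abs(b - a) for a, b in zip(forecasts, forecasts[1:])]
    return sum(changes) / len(changes) if changes else 0.0
```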

A third possible extension and future research direction in a project control setting is the relevance and importance of the baseline schedule. Since all EVM/ES performance metrics and forecasts are measured relative to the baseline schedule, the quality of the baseline schedule could be an important driver for forecasting accuracy/stability. The connection between the baseline schedule and the EVM/ES methodology can be analysed using a relatively new concept, known as schedule adherence, which could potentially play an important role in this future research direction. The concept of schedule adherence was originally proposed by Lipke [149] as a simple extension of the earned schedule method resulting in the so-called p-factor. The p-factor is defined as the portion of earned value accrued in congruence with the baseline schedule, that is, on the tasks which ought to be either completed or in progress. The rationale behind this new measure is that performing work not according to the baseline schedule often indicates activity impediments or is likely a cause of rework. The basic assumption behind this new approach lies in the idea that whenever impediments occur (activities that are performed relatively less efficiently compared to the project progress), resources are shifted from these constrained activities to other activities where they can gain earned value. However, this results in a project execution which deviates from the original baseline schedule. Consequently, this might involve a certain degree of risk, since the latter activities are performed without the necessary inputs and might result in a certain portion of rework. To date, the concept has not yet passed the test of logic, and future research will indicate whether it has merit in a control setting. To the best of our knowledge, the concept has only been preliminarily analysed and investigated in a simulation study published in Vanhoucke [63, 150].
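The p-factor is commonly written as p = sum_i min(PV_i(ES), EV_i(AT)) / sum_i PV_i(ES), with PV_i(ES) the planned value of activity i at the earned schedule ES and EV_i(AT) its earned value at the actual time AT; the sketch below follows that definition with hypothetical activity-level inputs.

```python
def p_factor(pv_at_es, ev_at_now):
    """Schedule adherence (the p-factor).

    pv_at_es[i] : planned value of activity i at time ES (the earned schedule)
    ev_at_now[i]: earned value of activity i at the current actual time
    p = sum_i min(PV_i(ES), EV_i(now)) / sum_i PV_i(ES); p = 1 means that all
    earned value so far was accrued in congruence with the baseline schedule.
    """
    total_planned = sum(pv_at_es.values())
    congruent = sum(min(pv, ev_at_now.get(i, 0.0)) for i, pv in pv_at_es.items())
    return congruent / total_planned if total_planned else 1.0
```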

4.3. Research Meets Practice

A final future research avenue lies in the translation of academic research results into practical guidelines and rules-of-thumb that are relevant for professionals [151]. The research studies presented and written by the author of this paper have led to various outcomes that aim at bringing the academic world and the professional business world closer to each other. Some of the most relevant results are briefly discussed along the following lines.

The project scheduling game (PSG, http://www.protrack.be/psg) is an IT-supported simulation game to train young project management professionals in the basic concepts of baseline scheduling, risk management, and project control. The business game is used in university and MBA programmes as well as in commercial trainings and allows participants to get acquainted with the dynamic scheduling principles in a learning-by-doing way. References can be found in Vanhoucke et al. [152] and Wauters and Vanhoucke [153].

EVM Europe (http://www.evm-europe.eu/) is the European organisation that brings practitioners and researchers together to share new ideas, to stimulate innovative research, and to advance the state of the art and best practices in project control. At EVM Europe, research meets practice at yearly conferences showcasing best practices and research results and trying to narrow the gap between the two project management worlds.

PM Knowledge Center (PMKC, http://www.pmknowledgecenter.com/) is a free and online learning tool to stimulate interaction between researchers, students, and practitioners in the field of project management. It contains papers and reference resources to inform and improve the practice of dynamic scheduling and integrated project control.

In the book “The Art of Project Management: A Story about Work and Passion” [154], an overview is given of recent endeavours and of ideas for the future. It tells about the products and ideas in project management and provides a brief overview of the most important people who inspired the author of the current paper for the work that has been done in the past. It does not look at the project management work from a research point of view only, but also from a teaching and commercial point of view. It tells about the work and the passion that has led to the results of the hard work. It is not a scientific book. It is not a managerial book either. It is just a story about work and passion.

5. Conclusions

In this paper, an overview of recent research results and future research avenues is given for a specific topic in project management and scheduling research using simulations. It is shown that the simulation studies of this paper fit in the research domain of dynamic scheduling, which refers to a dynamic and integrated approach to baseline scheduling, risk analysis, and project control. The focus of this paper lies on the integration between risk analysis and project control, while the baseline scheduling step is considered as given.

A simple and easy-to-use project control simulation algorithm is presented consisting of four steps, including the construction of a project baseline schedule. A nonexhaustive literature overview on baseline scheduling using different scheduling objectives is given in the paper. The definition of variation in the activity durations is the central starting point in this paper, and hence all the discussed simulation studies only report results for the time performance of projects in progress.

Three simulation studies have been discussed in the paper, based on numerous research projects done in the past and published throughout the literature in academic papers, popular magazines, websites, and books. In a first schedule risk analysis study, four well-known metrics to measure the sensitivity of variation in activity durations are compared and validated, and their ability to make a distinction between activities with a low and a high expected impact on the project duration is analysed. A second forecasting accuracy study focuses on three predictive methods using earned value management and earned schedule and compares the absolute and relative errors of the forecasting methods. A last project control study integrates the two previous studies in an action-oriented project control framework and compares two alternative control methods, known as bottom-up and top-down control, and measures their efficiency.

Finally, three directions for future research are briefly discussed. First, the overwhelming amount of data available to control projects should lead to improved statistical control techniques and, ultimately, to automatic decision support systems that better steer the actions taken by project managers. Moreover, improving the accuracy of existing or new techniques and extending the studies on stability and schedule adherence will probably contribute to a better understanding of the real drivers of project control and success. Finally, the necessity of bringing the often separate worlds of research and practice closer to each other is a never-ending task and challenge for both researchers and professionals, in order to let the project management discipline move forward.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research summarised in this paper has been funded by various research funding organisations. Therefore, the author acknowledges the national support given by the “Fonds voor Wetenschappelijk Onderzoek” (FWO) for the projects under Contract nos. G/0194/06, G/0095.10N, and 3G015711, as well as the support of the “Bijzonder Onderzoeksfonds” (BOF) for the projects under Contract nos. 01110003 and BOF12GOA021. Furthermore, the support by the Research Collaboration Fund of PMI Belgium received in Brussels, in 2007, at the Belgian Chapter Meeting, the support for the research project funding by the Flemish Government (2008), Belgium, and the additional funding of the National Bank of Belgium are also acknowledged. This research is part of the IPMA Research Award 2008 project by Mario Vanhoucke, who received the award at the 22nd World Congress in Rome, Italy, for his study “Measuring Time—An Earned Value Simulation Study.” An overview of the research results obtained at the “Operations Research and Scheduling” Group can be found in Vanhoucke [155].