Abstract

Estimation is an important part of software engineering projects, and the ability to produce accurate effort estimates affects key economic processes, including budgeting, bid proposals, and deciding the execution boundaries of a project. The work in this paper explores the interrelationship among different dimensions of software projects, namely, project size, effort, and effort-influencing factors. The study aims at providing a better effort estimate by tuning the parameters of a modified COCOMO, along with the detailed use of a binary genetic algorithm as the optimization algorithm. The significance of the 15 cost drivers is shown by their impact on the MMRE of effort for the original 63-project NASA dataset. The proposed method produces tuned values of the cost drivers that are effective enough to improve the productivity of the projects. Prediction at different levels of MRE for each project reflects the percentage of projects estimated with the desired accuracy. Furthermore, the model is validated on two different datasets and shows better estimation accuracy than COCOMO 81 on the NASA 63 and NASA 93 datasets.

1. Introduction

Estimation is an important part of software engineering projects, and the ability to produce accurate effort estimates affects key economic processes, including budgeting, bid proposals, and deciding the execution boundaries of a project [1]. Effort estimation is a critical activity for planning and monitoring software project development and for delivering the product on time and within budget. The feasibility of the project in terms of cost and its ability to meet the customer's requirements are also considered in the process of estimation [2]. The effort to be consumed in a software project is probably the most sought-after variable in the process of project management. Determining the value of this variable in the early stages of a software project drives the planning of the remaining activities. The estimation activity is plagued with uncertainties and obstacles, and measurement of past projects is a necessary step toward solving this problem. The problem of accurate effort estimation is still open, and the project manager is confronted at the beginning of a project with the same quagmires as a few years ago [3]. The software industry's inability to provide accurate estimates of development cost, effort, and/or time is well known [4].

Over the past few years, software development effort has been found to be one of the worst estimated attributes. Significant over- or underestimates can be very expensive for a company, and the competitiveness of a software company depends heavily on the ability of its project managers to accurately predict in advance the effort required to develop software systems [5]. Effort also needs to be estimated reliably in order to complete projects on time and within budget, as less than one-quarter of projects are estimated accurately.

Many model structures have evolved in the literature; these structures model the relationship between software effort, developed lines of code (DLOC), and influencing factors, typically in the form

Effort = f(DLOC, influencing factors).

Building such a relationship as a function helps project managers to allocate the available resources for the project accurately [6]. Among others, the Constructive Cost Model (COCOMO) is a widely known effort estimation model in which developed lines of code (DLOC) are the primary element affecting the effort estimate. DLOC include all program instructions and formal statements [6, 7]. The aim here is to provide a basis for software effort estimation through a systematic review of previous research papers [8]. Some research studies have demonstrated that the level of accuracy in software effort estimates is strongly influenced by the selection of the input values of the parameters of these methods. Combining input feature selection with parameter optimization of machine learning methods improves the accuracy of software development effort estimation [9].

Recently, the use of search-based methods has been suggested to address the software development effort estimation problem [10, 11]. Such a problem can be formulated as an optimization problem in which we have to identify the estimation model that provides the best prediction [5]. This study aims at providing a better effort estimate by tuning the parameters of a modified COCOMO, along with the detailed use of a binary genetic algorithm as the optimization algorithm. The estimation accuracy of the developed model was tested on the 93- and 63-project NASA datasets and compared to the preexisting COCOMO. The developed model provides better estimation capabilities.

The paper is organized into 7 sections. Section 2 illustrates the problem and the techniques that form part of it, Section 3 depicts the solution approach, and Section 4 describes the proposed algorithm for solving the problem; four submodels are introduced in that section. Evaluation criteria and data analysis are discussed in Section 5. Results of the proposed method are analyzed in Section 6. Finally, the paper is concluded in Section 7.

2. Problem Illustration

2.1. Problem Statement

Software development effort estimates are likely to be highly inaccurate and systematically overoptimistic due to the valence effect of prediction, anchoring, the planning fallacy, and other cognitive effects. Empirical evidence suggests that the causes of the problem are, to some extent, the influence of irrelevant and misleading information, for example, information regarding the client's budget, present in the estimation material [12]. Previous research has shown that the average effort overrun in software development projects is about 30%–40% [1, 4]. Estimating techniques have emerged continually, and attempts have been made to compare these techniques and derive best practices [1, 4, 13].

Empirical software estimation models are mainly based on cost drivers and scale factors. These models exhibit instability with respect to the values of the cost drivers and scale factors, which affects their sensitivity and hence the accuracy of the effort estimates. Also, most of the models depend on the size of the project, and a small change in the size leads to a proportionate change in the effort. Misjudging the cost drivers produces even noisier data. For example, a misjudgment of the personnel capability cost driver in COCOMO between "very high" and "very low" results in a 300% increase in effort. Similarly, in SEER-SEM, changing the security requirements value from "low" to "high" results in a 400% increase in effort. In PRICE-S, a 20% change in effort occurs due to a small change in the value of the productivity factor [14]. The above statements reveal that all models have one or more inputs for which small changes result in large changes in effort. The input data problem is further compounded in that some inputs are difficult to obtain, especially in the early stages of program development. The size must be estimated early in a project using one or more sizing models. Some sensitive inputs, such as the analyst and programmer capability cost drivers, are based on individuals and are often difficult to determine. Many studies, like the one performed by [15], show that personnel parameter data are difficult to collect.

2.2. Algorithmic Models

Many software estimation models have been proposed by various researchers and can be categorized according to their basic formulation schemes: analogy based estimation schemes [16–19], expert-judgment estimation [20], and algorithmic models including empirical methods [21], rule induction methods [22], Bayesian network approaches [23], decision tree based methods [24], artificial neural network based approaches [25, 26], and fuzzy logic based estimation schemes [27].

Among these diversified models, some well-known algorithmic models, COCOMO, SLIM, SEER-SEM, and FP analysis, are very popular in practice in the empirical category [28], while COCOMO and Function Points allow us to estimate the size (in KLOC) of the software ourselves. Albrecht observed in his research that Function Points were highly correlated with lines of code, so in effect the two measures are complementary [29]. Function Points relate to logical source lines of code, whereas COCOMO is based on physical source lines of code. These empirical models work with certain inputs, accurate estimates of specific attributes such as source lines of code (SLOC), multiplicative factors, interfaces and complexity, and number of user screens, which are not always easy to acquire during the early stage of software project development. Models based on historical data have limitations: understanding and calibrating these models is difficult because the inherently complex relationships between the attributes used to predict software development effort can change over time and/or differ across software development environments [26]. They are also unable to handle categorical data and lack reasoning capabilities [30].

The limitations of algorithmic models have led to the exploration of nonalgorithmic models, which are soft computing based [30].

2.3. Constructive Cost Model

The original Constructive Cost Model, abbreviated as COCOMO, was first published by Dr. Barry Boehm in 1981. The word "constructive" indicates that the complexity of the model can easily be understood due to the openness of the model, which shows exactly why the model gives the estimates it does. Since the inception of software development techniques, many efforts have been made to improve estimation; COCOMO is the best documented and most transparent of these models and reflects the software development practices of its day. The main focus of COCOMO is the estimation of the influence of 15 cost drivers on the development effort and cost. The model does not support project management in estimating the size of the software. COCOMO was derived from a database of 63 projects executed between 1964 and 1979 by the American company TRW Systems Inc. The projects considered during this era differed strongly in application type, size, and programming language [31].

Boehm introduced three levels of the estimation model: basic, intermediate, and detailed.
(i) The basic COCOMO 81 is a single-valued, static model that provides an approximate estimation of software development effort and cost as a function of program size expressed in thousand delivered source instructions (KDSI).
(ii) The intermediate COCOMO 81 describes software development effort as a function of program size in LOC and a set of fifteen "effort multipliers," known as cost drivers. These cost drivers incorporate subjective assessments of product, project, personnel, and hardware attributes.
(iii) The advanced or detailed COCOMO 81 reduces the margin of error in the final estimate by incorporating all characteristics of the intermediate version together with an assessment of the cost drivers' impact on each step of the software engineering process, that is, analysis and design.

COCOMO assumes that the effort grows more than linearly with software size. For some multipliers an increase in the rating decreases the effort, while for others a decrease in the rating decreases the effort; that is,

Effort = a × (KSLOC)^b × EAF.

Here, a and b are domain-specific parameters, KSLOC denotes kilo source lines of code, which is estimated directly or computed from a function point analysis, and EAF is the product of the fifteen effort multipliers EMi, i = 1 to 15. So the equation can also be written as

Effort = a × (KSLOC)^b × EM1 × EM2 × ... × EM15.
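
For illustration, the intermediate COCOMO calculation described above can be sketched in Python as follows. This is not the authors' code: the mode coefficients a and b are the commonly cited intermediate COCOMO 81 values, and the example driver values are placeholders.

from math import prod

# Commonly cited intermediate COCOMO 81 mode coefficients (a, b).
MODE_COEFFICIENTS = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def cocomo_effort(ksloc, mode, effort_multipliers):
    """Estimated effort in person-months: a * KSLOC^b * EAF."""
    a, b = MODE_COEFFICIENTS[mode]
    eaf = prod(effort_multipliers)   # effort adjustment factor, product of EM1..EM15
    return a * (ksloc ** b) * eaf

# Example: a 32 KLOC organic-mode project with all cost drivers rated nominal (EM = 1.0).
print(round(cocomo_effort(32, "organic", [1.0] * 15), 1))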

3. Solution to the Problem

3.1. Nonalgorithmic Models

In contrast to the algorithmic models, the nonalgorithmic models proposed since the 1990s are based on computational intelligence, analytical comparisons, and inference for project cost estimation. They have the capability to model the complex set of relationships between the dependent variables (cost, effort) and the independent variables (cost drivers) collected early in the project lifecycle, and to learn from historical project data. Using nonalgorithmic models requires information about previous project datasets that are similar to the project under estimation; estimation is usually carried out by analyzing these historical datasets. Many software researchers have shown interest in new nonalgorithmic approaches based on soft computing, that is, artificial neural networks, fuzzy logic, and evolutionary algorithms. These methods are popular for effort assessment, and a large number of papers about their usage have been published in recent years [26, 32–34]. Choosing a suitable technique is difficult and requires the support of a well-defined evaluation scheme to rank each evolutionary computation technique when it is applied to an optimization problem. In the present research study, an effective model based on evolutionary computation is proposed to overcome the problem of uncertainty and to acquire better results.

3.2. Genetic Algorithms

Evolutionary computational methods are widely used in software engineering tasks such as test case generation [35, 36], effort estimation, cost estimation, and many more. Genetic algorithms are a simple and almost generic evolutionary computational method, inspired by Darwin's theory of natural evolution, for solving complex optimization problems. A genetic algorithm requires a careful and suitable choice of parent selection method, mutation method, population size, and so forth, to find good solutions; if improper parameters and methods are chosen, program runs may be longer or the optimization results may be poor [37]. In nature, competition among individuals for scarce resources results in the fittest individuals dominating over the weaker ones [24, 38].

3.2.1. Working Principle

(i) The genetic algorithm starts with a randomly generated initial population, a set of candidate solutions represented by chromosomes.
(ii) The algorithm then generates a sequence of new populations. At each iteration, it uses the individuals of the current generation to create the next generation. To create the new population, the algorithm works with the following steps.
(a) Score each member of the current population by computing its fitness value.
(b) Scale the raw fitness scores to convert them into a more usable range of values.
(c) Select the good individuals, called parents, based on their fitness values.
(d) A few individuals in the current population with the best fitness values are selected as elite. These individuals are passed directly to the next generation (elitism).
(e) Produce offspring from the parents, either by mutating a single parent or by combining the chromosomes of a pair of parents with the crossover operator.
(f) Update the current population with the offspring to form the new generation.
(iii) The algorithm terminates when one of the stopping criteria is reached, for example, the number of generations or a desired fitness value.
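
As a rough illustration of the working principle above (not the implementation used in this study), a minimal generational GA loop in Python might look as follows; the population encoding, fitness function, and parameter values are placeholders supplied by the caller, and lower fitness is treated as better, as with MMRE.

import random

def run_ga(fitness, random_individual, mutate, crossover,
           pop_size=50, generations=100, elite_count=2, crossover_rate=0.8):
    """Minimal generational GA: score the population, keep elites, breed the rest."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # (a)-(b) score and rank the current population (lower fitness is better here)
        ranked = sorted(population, key=fitness)
        # (d) elitism: carry the best individuals over unchanged
        next_generation = ranked[:elite_count]
        # (c), (e) select parents and produce offspring until the population is refilled
        while len(next_generation) < pop_size:
            p1, p2 = random.sample(ranked[:pop_size // 2], 2)   # simple truncation selection
            child = crossover(p1, p2) if random.random() < crossover_rate else mutate(p1)
            next_generation.append(child)
        # (f) replace the old population with the new generation
        population = next_generation
    return min(population, key=fitness)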

4. Proposed Algorithm for Solving the Problem

Algorithm Description (see Algorithm 1). The proposed algorithm is divided into 4 submodels. In submodel 1, we calculate the mean magnitude of relative error (MMRE) of all projects according to the results obtained by COCOMO. Here, we first calculate the estimated effort by considering the 15 COCOMO cost drivers, the project mode, and kilo lines of code (KLOC). The estimated effort and the actual effort of each project are used as the input parameters to calculate the magnitude of relative error (MRE) of each project. The MMRE for the COCOMO results is recorded as the original MMRE.

Sub-Model 1:
Step 1. Generate the MMRE (M) for available projects using actual and COCOMO estimated efforts.
(i)   [BEGIN]
(ii)  Input the 15 cost drivers, KLOC, Actual Effort for NASA projects.
(iii) [LOOP]
      for i = 1 to no. of projects (say n)
          EAF[i] = D1 * D2 * ... * D15
          Estimated Effort[i] = a[i] * (kloc[i] ^ b[i]) * EAF[i]
          MRE[i] = |Actual Effort[i] - Estimated Effort[i]| / Actual Effort[i]
          MMRE (original) += MRE[i]
      MMRE (original) /= n    [The original MMRE (M) is obtained and noted down]
(iv)  [END OF LOOP]
(v)   [END]
Sub-Model 2:
Step 2. for i = 1 to 15
            temp = EMi
            Set EMi = 1
            Calculate the influenced MMRE (MN)
            List[i].driver = i
            List[i].diff = MN - M
            EMi = temp
        end for
Sub-Model 3:
Step 3. Sort the list according to the second parameter (diff) in descending order
        for i = 1 to 14
            for j = i + 1 to 15
                if (list[i].diff < list[j].diff)
                then
                    swap (list[i], list[j])
                end if
            end for
        end for
Step 4. Sig = list[].driver represents the order of significance occurrences.
Sub-Model 4:
Step 5. for i = 1 to 15 (in the order given by Sig)
            for j = very low to extra high (six ratings of the cost driver)
                Select projects (P) as input for calculating the fitness value using fitness function F1 = MMRE(P).
                Set the range R as {Rmin, Rmax}
                Generate the initial population for the cost driver within range R.
                Perform the genetic operations for K generations:
                (1) Tournament selection
                (2) Crossover with Pc = 0.8
                (3) Mutation with Pm = 0.3
                Select the individual (CDnew) with the best MMRE.
Step 6.         Calculate the MMRE (Mmod) by replacing CDij with CDnew
                if (Mmod < M)
                then update the value of CDij and M
                else
                    discard the value
                end if
            end for
        end for

In submodel 2, the influenced MMRE is calculated for each of the 15 cost drivers. This influenced MMRE shows the effect of each cost driver on the estimated effort in person-months. In this process, we take sample data having 18 input parameters, that is, the 15 cost drivers, the mode, the source lines of code, and the actual effort. The estimated efforts are calculated for the sample data by nullifying the effect of the cost drivers one by one. These efforts are used to calculate the influenced MMRE for each cost driver against the actual effort provided in the sample data. The difference between the influenced MMRE and the original MMRE is recorded in a list along with the driver.
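
A minimal sketch of this nullification step is given below, assuming a cocomo_effort estimator such as the one sketched in Section 2.3 and project records that carry their 15 driver values, mode, KLOC, and actual effort; the field names are illustrative, not the authors' data format.

def mmre(projects, nullify=None):
    """Mean magnitude of relative error; optionally nullify one cost driver (set it to 1)."""
    total = 0.0
    for p in projects:
        drivers = list(p["drivers"])            # the 15 effort-multiplier values
        if nullify is not None:
            drivers[nullify] = 1.0               # cancel this driver's influence
        est = cocomo_effort(p["kloc"], p["mode"], drivers)
        total += abs(p["actual_effort"] - est) / p["actual_effort"]
    return total / len(projects)

def significance_order(projects):
    """Order drivers by the difference between influenced and original MMRE (descending)."""
    original = mmre(projects)
    diffs = [(i, mmre(projects, nullify=i) - original) for i in range(15)]
    return sorted(diffs, key=lambda item: item[1], reverse=True)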

In submodel 3, the list containing the differences between the influenced MMRE and the original MMRE is sorted in descending order of the difference to obtain the significance order of the drivers. This order is named Sig.

In submodel 4, we minimize the MMRE by updating the values of the cost drivers with the help of the genetic algorithm, in the order of their significance. This is done by selecting the projects that fall in the category of a particular cost driver rating and then applying the genetic algorithm operators; the resulting candidates are evaluated with MMRE as the fitness function. If the MMRE is reduced, the cost driver value for the particular rating is updated; otherwise it is discarded. The reduced MMRE is recorded as Mmod, which is used as the reference MMRE for the remaining cost drivers. A sketch of this accept/reject loop is given below.
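
The following hedged sketch illustrates the accept/reject loop; run_binary_ga is a hypothetical stand-in for the binary GA search over a candidate value range (tournament selection, Pc = 0.8, Pm = 0.3, fitness = MMRE of the selected projects), cocomo_effort is the estimator sketched earlier, and the table and project field names are illustrative.

RATINGS = ["very_low", "low", "nominal", "high", "very_high", "extra_high"]

def mmre_from_table(projects, table):
    """MMRE when each project's multipliers are looked up from a rating -> value table."""
    total = 0.0
    for p in projects:
        eaf = 1.0
        for d, rating in enumerate(p["ratings"]):      # one rating per cost driver
            eaf *= table[d][rating]
        est = cocomo_effort(p["kloc"], p["mode"], [eaf])
        total += abs(p["actual_effort"] - est) / p["actual_effort"]
    return total / len(projects)

def tune_cost_drivers(projects, table, significance, run_binary_ga):
    """Tune each driver rating in significance order; keep a new value only if MMRE drops."""
    best = mmre_from_table(projects, table)
    for driver, _ in significance:                     # most influential driver first
        for rating in RATINGS:
            if rating not in table[driver]:
                continue                               # not every driver defines every rating
            subset = [p for p in projects if p["ratings"][driver] == rating]
            if not subset:
                continue
            old = table[driver][rating]
            # Hypothetical binary GA search around the current multiplier value.
            table[driver][rating] = run_binary_ga(subset, driver, old)
            new = mmre_from_table(projects, table)
            if new < best:
                best = new                             # accept the tuned value
            else:
                table[driver][rating] = old            # discard it
    return table, best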

5. Evaluation Method

5.1. Conceptual View

Software cost estimation models need to be evaluated quantitatively in terms of estimation accuracy to improve the modeling process. Some rules or measurements must be provided for model assessment. Such a measure of accuracy defines how close the estimated result is to its actual value. Software cost estimates play a significant role in delivering software projects. Researchers have therefore adopted the most widely used evaluation criterion for assessing the performance of software prediction models, the mean magnitude of relative error (MMRE), to evaluate the quality of prediction systems. MMRE is usually computed following standard evaluation processes such as cross-validation [39]. It is independent of size scale and effort units, so comparisons can be made across datasets and prediction model types [40].

COCOMO computes effort on the basis of source lines of code. In intermediate COCOMO, Boehm used 15 additional predictor variables, called cost drivers, which are required to calibrate the nominal effort of a project to the actual project environment. A value is set for each cost driver according to the properties of the specific software project, and the numerical values of the 15 cost drivers are multiplied together to obtain the effort adjustment factor (EAF).

5.2. Data Analysis

Performance of estimation methods is usually evaluated by several ratio measurements of accuracy, including RE (relative error), MRE (magnitude of relative error), and MMRE (mean magnitude of relative error), which are computed as follows:

RE = (Estimated Effort - Actual Effort) / Actual Effort,
MRE = |Estimated Effort - Actual Effort| / Actual Effort,
MMRE = (1/N) × Σ MREi, for i = 1 to N.

Another parameter used in evaluating the performance of an estimation method is PRED (percentage of prediction), which is determined as

PRED(x) = A / N,

where A is the number of projects with MRE less than or equal to level x and N is the number of considered projects. Usually, the acceptable level of x is 0.25, and the various methods are compared at this level. Decreasing MMRE and increasing PRED are the main aims of all estimation techniques.
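
These definitions can be written directly in Python; the sketch below merely restates the formulas above and is not the authors' evaluation code.

def relative_error(actual, estimated):
    """RE: signed relative error of one project."""
    return (estimated - actual) / actual

def magnitude_of_relative_error(actual, estimated):
    """MRE: magnitude of relative error of one project."""
    return abs(estimated - actual) / actual

def mmre(actuals, estimates):
    """MMRE: mean magnitude of relative error over N projects."""
    return sum(magnitude_of_relative_error(a, e)
               for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, x=0.25):
    """PRED(x): fraction of projects whose MRE is at most x."""
    hits = sum(1 for a, e in zip(actuals, estimates)
               if magnitude_of_relative_error(a, e) <= x)
    return hits / len(actuals)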

6. Results and Discussion

6.1. Dataset Description

Experiments were performed on the 63-project COCOMO 81 based dataset used by NASA, and various calculations were carried out on it. In addition, 93 NASA projects from different centers, covering the years 1971 to 1987, were collected by Jairus Hihn, JPL, NASA, Manager of the SQIP Measurement and Benchmarking Element. The proposed model is validated on these datasets, which are among the most analyzed datasets available. The independent variable used is "adjusted delivered source instructions," which takes into account the variation of effort when adapting software. COCOMO is built upon these data points by introducing many factors in the form of multipliers.

Together, these datasets include 156 historical projects with 17 effort drivers and one dependent variable, the software development effort.

6.2. Result Analysis

Cost drivers play a vital role in estimating the effort and cost to be incurred. They represent characteristics of software development that influence the effort of carrying out a project. Cost drivers are selected on the argument that they have a linear effect on effort. The COCOMO cost drivers are the basis for the analysis of the proposed algorithm. Table 1 depicts the COCOMO effort multipliers.

The significance of the 15 cost drivers can be shown by their impact on the MMRE of efforts for the original 63-project NASA dataset. The significance occurrences of the 15 cost drivers, calculated by applying steps 1 to 4, are shown in Table 2.

The occurrence of each cost driver varies linearly with the MMRE calculated between the actual effort and the effort estimated with COCOMO. In Figure 1, each cost driver's influenced MMRE is plotted against the original MMRE, which is constant across all cases. The effect of each cost driver on the MMRE is the key aspect in deriving the occurrence order of the cost drivers. The proportionate relationship can be seen from Figure 1: cost drivers with higher influenced MMRE against the constant original MMRE are the most significant, and those with lower values are less significant.

Once the significance occurrences of the cost drivers are found, this sequence is used to produce tuned values for the different ratings of the cost drivers. Steps 5 and 6 are used to generate the new values of the available cost drivers. Table 3 shows the tuned values of the preexisting cost drivers.

The proposed algorithm is validated with two different datasets of NASA projects. According to the evaluation criteria, the proposed method shows only a marginal difference between its efforts and the actual project efforts, compared with the COCOMO generated efforts, as shown in Figure 2. Most of the results stay near the mean MRE for the 63 data values. The 93-project dataset was also used to evaluate the proposed method (Figures 3 and 4).

A comparison between the proposed method and other estimation methods in terms of MMRE is given in Table 4. The proposed method has an average error of 0.27 with respect to the actual efforts, whereas COCOMO produces a somewhat higher percentage of error. The proposed model also works efficiently on the 93-project dataset.

Essentially, we want to measure the useful functionality produced per unit of time. Productivity is another measure of the effectiveness of the model: it is the rate or ratio at which the software developers involved in development produce software and its associated documentation.

Higher productivity reflects better quality achievement in project development. The proposed method has a productivity of 0.29, which is close to the productivity of 0.27 computed from the actual efforts. Compared with the actual productivity, the proposed method's productivity is about 7 percent higher, whereas COCOMO's productivity is about 9 percent lower (Table 5), so the difference between the proposed method and the COCOMO results is approximately 17.95 percent.
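
As a rough illustration only: the paper does not state its productivity formula, but if productivity is taken as delivered KLOC per person-month (a common convention), it could be computed as follows.

def productivity(kloc, effort_person_months):
    """Delivered size per unit of effort, e.g., KLOC per person-month (assumed definition)."""
    return kloc / effort_person_months

def percent_change(measured, actual):
    """Signed percentage difference of a measured productivity from the actual one."""
    return 100.0 * (measured - actual) / actual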

Tables 6 and 7 show the error present in all three categories of project modes for the two datasets; the comparison is between the results generated by the proposed model and the COCOMO results. We also evaluated the different types of project applications categorically: 80% of the projects in the datasets produce results that are better than the COCOMO based results (Table 8).

PRED was calculated with the two separate approaches, and Table 9 shows that, for the 3 different PRED levels, the proposed method produces increases in PRED of approximately 6.665%, 8.01%, and 8.34%, respectively.

7. Conclusion

The work carried out in this paper explores the interrelationship among different dimensions of data driven software projects, namely, project size and effort. The results demonstrate that applying the proposed method to software effort estimation is by far the most feasible approach for addressing the problem of apprehension and ambiguity in software effort drivers. The order of occurrence of the cost drivers has a significant impact on the overall effort estimates. Small adjustments to the COCOMO cost drivers bring significant improvements under the quality criteria applied to the proposed approach. The proposed method produces tuned values of the cost drivers that are effective enough to improve the productivity of the projects. Prediction at different levels of MRE for each project reflects the percentage of projects estimated with the desired accuracy. Furthermore, the model is validated on two different datasets and shows better estimation accuracy than COCOMO 81 on the NASA 63 and NASA 93 datasets. The use of the proposed algorithm for other applications in the software engineering field can be explored in the future.

Conflict of Interests

The authors certify that there is no actual or potential conflict of interests in relation to this paper. The American Company TRW Systems Inc. has been referred to as the company where Barry W. Boehm, the developer of COCOMO, worked.