Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 2476584, 13 pages

http://dx.doi.org/10.1155/2016/2476584

## Software Reliability Growth Model with Partial Differential Equation for Various Debugging Processes

School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Received 19 September 2015; Revised 8 December 2015; Accepted 20 December 2015

Academic Editor: Jean-Christophe Ponsart

Copyright © 2016 Jiajun Xu and Shuzhen Yao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Most Software Reliability Growth Models (SRGMs) based on the Nonhomogeneous Poisson Process (NHPP) generally assume perfect or imperfect debugging. However, environmental factors introduce great uncertainty for SRGMs in the development and testing phases. We propose a novel NHPP model based on a partial differential equation (PDE) to quantify the uncertainties associated with the perfect or imperfect debugging process. We represent the environmental uncertainties collectively as a noise of arbitrary correlation. Under the new stochastic framework, one can compute the full statistical information of the debugging process, for example, its probabilistic density function (PDF). Through a number of comparisons with historical data and existing methods, such as the classic NHPP model, the proposed model exhibits a closer fit to observations. In addition to the conventional focus on the mean value of fault detection, the newly derived full statistical information can further help software developers make decisions on system maintenance and risk assessment.

#### 1. Introduction

Software reliability, defined as the probability of failure-free operation under certain conditions and within a specified time [1], is one of the most significant attributes of the software development life cycle. As software systems mature with ever-growing complexity, the evaluation, prediction, and improvement of their reliability become a crucial and daunting task for developers in both the development and testing phases. Numerous Software Reliability Growth Models (SRGMs) have been developed [2–5]; they generally agree that the fault debugging process is a Nonhomogeneous Poisson Process (NHPP) but diverge in their assumptions, most notably between the perfect and imperfect debugging phenomena [6, 7].

A common assumption is that removing detected faults introduces no new faults, usually called the perfect debugging phenomenon. Based on the earlier work of Jelinski and Moranda [8], Goel and Okumoto [9] developed the exponential Software Reliability Growth Model (G-O model) with a constant fault detection rate. Later models incorporate smoothly varying fault detection rates. Yamada et al. [10] found that the fault detection rate fits an S-shaped growth curve and proposed the delayed S-shaped SRGM. Employing the learning phenomenon in the software fault detection process, Ohba [11] later developed an alternative inflection S-shaped SRGM. Further work was conducted by Yamada and others [12], who incorporated the relationship between the work effort and the number of detected faults into the model. Huang et al. [13] observed that the fault detection and repair processes differ between software development and operation; they proposed a unified theory to merge multiple models into an NHPP-based SRGM and found a regularly changing fault detection rate in the detection process. Later, Huang and Lyu [6] also used multiple change points to deal with imperfect debugging problems. All of these models share the perfect debugging assumption. However, given the complexity of the software debugging process, debugging activity may be imperfect in a practical software development environment, and such perfect models may oversimplify the underlying dynamics.

To address these problems, imperfect debugging processes are incorporated in later models. As a software system is developed, new errors can be introduced during the development/testing phase, and faults may not be corrected perfectly. This phenomenon is known as imperfect debugging. According to these two phenomena, SRGMs can be divided into two categories: perfect and imperfect debugging models. Kapur and Garg [14] discussed a fault removal phenomenon in which, as the team gains experience, it may remove detected faults without causing new ones; in practice, however, the testing team may not correct a fault perfectly, and the original fault may remain or generate new faults. It was Goel [15] who first introduced the concept of imperfect debugging. Later, Ohba and Chou [16] developed an error generation model based on the G-O model, known as an imperfect debugging model. Pham et al. [17] proposed a general imperfect software debugging model (P-N-Z model). Pham [18] also developed an SRGM for multiple failure types incorporating error generation. Zhang et al. [19] proposed a testing efficiency model that includes both imperfect debugging and error generation. Kapur et al. [20] proposed a flexible SRGM with imperfect debugging and error generation, using a logistic function for the fault detection rate to reflect the efficiency of the testing and removal team. Employing the learning phenomenon in the software fault detection process, Yamada et al. [21] later developed an imperfect debugging model. In imperfect software debugging models, the fault introduction rate per fault is generally assumed to be constant or to decrease over time [22]. Recently, Wang et al. [22] also proposed a model that represents the imperfect debugging process with a log-logistic distribution.
To sum up, most models presume a deterministic relationship between the cumulative number of detected faults and the time span of the software fault debugging process.

However, software debugging is a stochastic and uncertain process, which can be affected by many environmental factors, such as the distribution of resources, strategy, and running environment [23]. In particular, in an uncertain network environment, the aforementioned deterministic assumptions become problematic: environmental noise introduces great uncertainties that significantly affect the traditional debugging process. To address such challenges, new stochastic models [24–27] were proposed, which treat the debugging process as perfect and stochastic; they assume that each failure occurrence is independent and follows an identical random distribution. By employing white noise, that is, temporally uncorrelated random variables, to represent the environmental factors collectively, they use a flexible stochastic differential equation to model the irregular changes. Compared with conventional models, such a white-noise-based approach assumes perfect debugging and no doubt provides a closer approximation to the uncertain fluctuations in reality, with great mathematical simplicity. Yet debugging activity is usually imperfect in practical software development, and recent data [28, 29] show that fault detection is highly susceptible to noise and is generally correlated in time, so the earlier assumptions become problematic. Because of its mathematical simplicity, the white-noise approach may also considerably misrepresent the imperfect debugging process and the temporal correlation in a dynamic environment.

In this study, we propose an alternative stochastic framework based on a partial differential equation (PDE) to describe perfect/imperfect debugging processes in which the environmental noise is of arbitrary correlation. Details of the model are provided in Section 2, together with an equation for the probabilistic density function (PDF) of the system states. In Section 3, we validate our approach against both historical failure data and results from conventional methods; the main features of the model, including the temporal evolution of its full statistical information, are addressed in Section 4. Conclusions are given in Section 5.

#### 2. Problem Formulation

##### 2.1. Model Formulation

Widely used in many practical applications, the NHPP model assumes the following:

(1) the fault debugging process is modeled as a stochastic process within a continuous state space;
(2) the system is subject to failures at random times caused by the remaining errors in the system;
(3) the mean number of detected faults in a time interval is proportional to the mean number of remaining faults;
(4) when a fault is detected, it may be removed with probability p;
(5) when a fault is removed, new faults may be generated with a constant probability q.

In order to describe the stochastic and uncertain behavior of the fault debugging process in a continuous state space, the temporal evolution of m(t) is routinely described as [12]

$$\frac{dm(t)}{dt} = b(t)\left[N - p\,m(t) + q\,m(t)\right] = b(t)\left[N - r\,m(t)\right], \tag{1}$$

where m(t) is the expected number of faults detected in the time interval [0, t], N denotes the number of total faults, and b(t) is a nonnegative function representing the fault detection rate per remaining fault at testing time t. We further note that q m(t) is the number of newly introduced bugs, while p m(t) is the number of successful removals; together they represent a (perfect/imperfect) debugging process in which probabilities p and q are assigned [30–32]. Without loss of generality, we denote r = p − q to facilitate the subsequent presentation; r = 1 indicates a perfect debugging process.
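For a constant detection rate b(t) ≡ b, the mean-value dynamics dm/dt = b(N − r m) with m(0) = 0 integrate to m(t) = (N/r)(1 − e^(−rbt)). A minimal numerical sketch (all parameter values are illustrative, not drawn from the paper):

```python
import numpy as np

def mean_faults(t, N=100.0, b=0.1, r=1.0):
    """Closed-form mean of dm/dt = b*(N - r*m) with m(0) = 0.

    r = 1 recovers the perfect-debugging case; r < 1 mimics imperfect
    debugging, where some removals introduce new faults.
    """
    return (N / r) * (1.0 - np.exp(-r * b * np.asarray(t, dtype=float)))

# Evaluate the mean detection curve on a test-time grid
t_grid = np.linspace(0.0, 50.0, 201)
m_grid = mean_faults(t_grid)
```

Note that the curve saturates at N/r: with imperfect debugging (r < 1) the expected number of detections eventually exceeds the initial fault content N, reflecting the newly generated faults.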

Since b(t) is subject to a number of random environmental effects [23], we follow the previous studies [24, 25, 27] and represent b(t) as the sum of its mean value b̄(t) and a zero-mean (random) fluctuation b′(t):

$$b(t) = \bar{b}(t) + b'(t). \tag{2}$$

By substituting (2) into (1), we obtain a stochastic differential equation:

$$\frac{dm(t)}{dt} = \left[\bar{b}(t) + b'(t)\right]\left[N - r\,m(t)\right]. \tag{3}$$
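The stochastic differential equation above can be explored directly by Monte Carlo simulation. In the sketch below the zero-mean fluctuation b′(t) is generated, purely for illustration, as an Ornstein–Uhlenbeck process (a standard model for noise with a finite correlation time); all parameter values are assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(T=50.0, n=2000, N=100.0, b_bar=0.1, r=1.0,
                  sigma=0.02, tau=2.0):
    """One realisation of dm/dt = (b_bar + b'(t)) * (N - r*m), where the
    zero-mean fluctuation b'(t) is an Ornstein-Uhlenbeck process with
    standard deviation sigma and correlation time tau."""
    dt = T / n
    decay = np.exp(-dt / tau)   # one-step correlation factor
    m = np.zeros(n + 1)
    bp = 0.0                    # fluctuation b'(t), starts at its mean
    for i in range(n):
        # forward-Euler step of the fault-detection dynamics
        m[i + 1] = m[i] + (b_bar + bp) * (N - r * m[i]) * dt
        # exact OU update: keeps Var[b'] = sigma^2 at every step
        bp = decay * bp + sigma * np.sqrt(1.0 - decay**2) * rng.standard_normal()
    return m

path = simulate_path()
```

Each call produces one noisy realisation of the cumulative fault-detection curve; the ensemble of such paths is what the PDF method of Section 2.2 characterises analytically.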

In contrast to the earlier works [24, 25, 27], which approximate the fluctuation term b′(t) as white noise, we relax this assumption and denote the noise's two-point covariance more broadly as

$$\left\langle b'(t_1)\,b'(t_2)\right\rangle = \sigma^2\,\rho\!\left(\frac{|t_1 - t_2|}{\tau}\right), \tag{4}$$

where σ² represents the strength of the noise, ρ denotes its correlation function, t₁ and t₂ indicate two temporal points, and τ is the noise correlation time.
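For concreteness, one admissible choice is an exponential correlation function; the following small sketch encodes it (the exponential form and parameter values are assumptions for illustration, not prescribed by the model):

```python
import numpy as np

def noise_covariance(t1, t2, sigma2=0.01, tau=2.0):
    """Two-point covariance <b'(t1) b'(t2)> = sigma2 * rho(|t1 - t2| / tau),
    here with an (assumed) exponential correlation rho(s) = exp(-s)."""
    return sigma2 * np.exp(-abs(t1 - t2) / tau)
```

In the limit τ → 0 this covariance collapses towards the delta-correlated (white-noise) case assumed by the earlier stochastic SRGMs, so the white-noise models are recovered as a special case of the present framework.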

##### 2.2. PDF Method

Our goal is to derive an equation for the probabilistic density function of m(t), that is, p(M; t). To be specific, we adopt the PDF method originally developed in the context of turbulent flow [33] and use the Dirac delta function δ(·) to define a "raw" PDF of the system state at time t:

$$\Pi(M; t) = \delta\big(M - m(t)\big), \tag{5}$$

whose ensemble average over random realisations of b′(t) yields

$$p(M; t) = \left\langle \Pi(M; t) \right\rangle. \tag{6}$$

Following a recent study on the PDF method [34], we multiply the original equation (3) with −∂Π/∂M and apply the properties of the Dirac delta function. This yields the stochastic evolution equation for Π in the probability space (M, t):

$$\frac{\partial \Pi}{\partial t} + \frac{\partial}{\partial M}\Big[\big(\bar{b}(t) + b'(t)\big)\big(N - rM\big)\,\Pi\Big] = 0. \tag{7}$$

Now we take the ensemble average of (7) and apply definition (6); by employing a closure approximation, variously known as the large-eddy-diffusivity (LED) closure [35] or the weak approximation [36], an equation for p(M; t) can be obtained:

$$\frac{\partial p}{\partial t} + \frac{\partial}{\partial M}\Big[\big(\bar{b}(t)(N - rM) + u\big)\,p\Big] = \frac{\partial}{\partial M}\left(D\,\frac{\partial p}{\partial M}\right), \tag{8a}$$

where the eddy diffusivity D(M, t) and effective velocity u(M, t) are

$$D(M, t) = \sigma^2 (N - rM) \int_0^t \!\!\int \rho\!\left(\frac{|t - t'|}{\tau}\right) (N - rM')\, G(M, t; M', t')\, dM'\, dt', \tag{8b}$$

$$u(M, t) = -\,\sigma^2 (N - rM) \int_0^t \!\!\int \rho\!\left(\frac{|t - t'|}{\tau}\right) \frac{\partial}{\partial M'}\Big[(N - rM')\, G(M, t; M', t')\Big]\, dM'\, dt'. \tag{8c}$$

And Green's function G satisfies the deterministic partial differential equation

$$\frac{\partial G}{\partial t'} + \frac{\partial}{\partial M'}\Big[\bar{b}(t')\,(N - rM')\,G\Big] = -\,\delta(M - M')\,\delta(t - t'), \tag{8d}$$

subject to the homogeneous initial (and boundary) conditions corresponding to their (possibly inhomogeneous) counterparts for the raw PDF (7).
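The ensemble average defining p(M; t) can also be approximated numerically, which gives a reference against which the closed PDE solution can be checked: sample many realisations of the correlated noise, record m(t*) for each, and histogram the results. A self-contained sketch (illustrative parameters; the Ornstein–Uhlenbeck noise is an assumption for the demonstration, not the paper's prescription):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_pdf(t_star=10.0, runs=500, n=250, N=100.0, b_bar=0.1,
           r=1.0, sigma=0.02, tau=2.0, bins=30):
    """Histogram estimate of p(M; t*) obtained by ensemble-averaging the
    'raw' PDF delta(M - m(t*)) over many correlated-noise realisations."""
    dt = t_star / n
    decay = np.exp(-dt / tau)
    finals = np.empty(runs)
    for k in range(runs):
        m, bp = 0.0, 0.0
        for _ in range(n):
            m += (b_bar + bp) * (N - r * m) * dt
            bp = decay * bp + sigma * np.sqrt(1.0 - decay**2) * rng.standard_normal()
        finals[k] = m
    # density=True normalises the histogram so it integrates to one
    density, edges = np.histogram(finals, bins=bins, density=True)
    return density, edges, finals

density, edges, finals = mc_pdf()
```

Beyond the mean, the spread and shape of this empirical density are exactly the "full statistical information" that the closed equation for p(M; t) delivers analytically.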

#### 3. Model Validation

In this section, we validate our PDE model (8a)–(8d) by computing the mean fault detection and comparing it with historical data and two other methods, namely, the classic deterministic SRGMs (generalised NHPP model) and the popular stochastic SRGMs (white-noise model). As shown in Table 1, four sets of data are used: the first (DS1) is from the WebERP system [37]; the second (DS2) originates from an open source project management software [38]; and the third (DS3) and fourth (DS4) are from the first and fourth product releases of the software products at Tandem Computers [32]. Among these four data sets, the two UDP/TCP-based systems (DS1 and DS2) are more likely to be affected by environmental factors, whereas the TCP-based software development at Tandem Computers Inc. took place in a more enclosed/stable environment. UDP is a connectionless transport protocol and introduces a larger correlation strength for systems based on it, while TCP is connection-oriented and generates a smaller correlation strength. We encapsulate this difference in terms of the noise variance, for example, a larger σ² for the UDP/TCP-based systems (DS1 and DS2). Moreover, DS1, DS2, and DS3 come from imperfect debugging processes, while DS4 comes from a perfect debugging process; we capture this diversity with various configurations of r. The testing time of DS1 is measured in months, and those of DS2, DS3, and DS4 in weeks; for detailed information, please refer to the data values tabulated in the Appendix.
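The kind of mean-value fitting underlying such comparisons can be sketched as follows. The fault counts and the grid-search ranges below are hypothetical illustrations, not the DS1–DS4 data (those are tabulated in the Appendix):

```python
import numpy as np

# Hypothetical cumulative fault counts for ten test weeks -- illustrative
# values only, NOT the DS1-DS4 data from the Appendix.
weeks = np.arange(1, 11, dtype=float)
faults = np.array([12, 21, 30, 36, 42, 46, 50, 52, 54, 56], dtype=float)

def mean_curve(t, N, b, r=1.0):
    """Mean fault-detection curve m(t) = (N/r) * (1 - exp(-r*b*t))."""
    return (N / r) * (1.0 - np.exp(-r * b * t))

# Coarse grid search for (N, b) minimising the mean squared error,
# with r fixed to 1 (perfect debugging) for simplicity.
candidates = [(np.mean((mean_curve(weeks, N, b) - faults) ** 2), N, b)
              for N in np.linspace(40.0, 120.0, 81)
              for b in np.linspace(0.01, 0.50, 50)]
mse, N_hat, b_hat = min(candidates)
```

In practice one would estimate all of N, b, and r (and, for the stochastic models, σ² and τ) for each data set, typically by least squares or maximum likelihood rather than a coarse grid.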