Abstract

Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model that takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates the parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to estimate the parameters of the model directly and compare the two methods of parametric estimation by random simulation. Our results show that, compared with TSM, MM yields a better data fit, more reasonable parameter estimates, and a smaller prediction error.

1. Introduction

In the study of fatigue lifetime evaluation of rubber materials, the accelerated aging test is widely used as an effective procedure for obtaining data on the performance indicator P, the aging time t, and the aging temperature T. In order to investigate the relationships among them, Dakin [1, 2] proposed the kinetic equation for aging; that is,

P = B exp(−K t^α),  (1)

where P is the performance indicator of rubber, t is the aging time, K is an aging rate constant depending on the temperature T, B is a constant, and α is a constant in (0, 1).

Mott and Roland [3] and Wise et al. [4] showed that the rate constant K in (1) can be expressed in the Arrhenius form. In this paper, we also adopt the convention that the rate constant K for rubber can be described by the Arrhenius type

K = a exp(−b/T),  (2)

where T is the aging temperature and a and b are constants.

By (2) and (1), we obtain the model

P = B exp(−a e^{−b/T} t^α),  (3)

which is called the bivariate nonlinear regression model in this paper. Here, B, a, b, and α are the model parameters. In the past, researchers (see, e.g., [5–7]) usually split (3) into (1) and (2) to estimate the parameters in (3).
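As a concrete illustration, model (3) can be sketched in code. The combined Dakin–Arrhenius form P = B·exp(−a·e^{−b/T}·t^α) and the symbol names (B, a, b, alpha) are our reading of the model, so this is a minimal sketch rather than a definitive implementation:

```python
import math

def aging_model(t, T, B, a, b, alpha):
    """Model (3): P = B * exp(-K * t**alpha) with Arrhenius K = a * exp(-b / T).

    Symbol names are illustrative; the combined Dakin/Arrhenius form is assumed
    from the surrounding derivation.
    """
    K = a * math.exp(-b / T)              # aging rate constant from (2)
    return B * math.exp(-K * t ** alpha)  # kinetic equation (1) with K substituted
```

Note the two expected monotonicities: P decreases with aging time t and, since K grows with T, also decreases with aging temperature T.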

The constant α is determined by the successive approximation method, which minimizes (4) to two decimal places:

Q(α) = Σ_i Σ_j (P_ij − P̂_ij)²,  (4)

where P_ij and P̂_ij denote the experimental measurement and the predicted value of the performance indicator of rubber when the aging temperature index is i and the experiment serial number is j, respectively. When α is assigned a value, (1) can be converted into the following linear form through a logarithm transformation:

Y = b_0 + b_1 X,  (5)

where Y = ln P, X = t^α, b_0 = ln B, and b_1 = −K.

The values of b_0 and b_1 are determined by the least squares method:

b̂_1 = Σ_j (X_j − X̄)(Y_j − Ȳ) / Σ_j (X_j − X̄)²,  b̂_0 = Ȳ − b̂_1 X̄.  (6)

The estimate B̂ is given by B̂ = (1/m) Σ_{i=1}^{m} exp(b̂_0^{(i)}), where m is the maximum index of aging temperature and b̂_0^{(i)} is the intercept fitted at the ith temperature.

The estimate K̂_i is given by K̂_i = −b̂_1^{(i)} and used as the known value in (2). Similarly, (2) can be converted into the following linear form through a logarithm transformation:

Y′ = c_0 + c_1 X′,  (7)

where Y′ = ln K, X′ = 1/T, c_0 = ln a, and c_1 = −b.

The values of c_0 and c_1 are also determined by the least squares method:

ĉ_1 = Σ_i (X′_i − X̄′)(Y′_i − Ȳ′) / Σ_i (X′_i − X̄′)²,  ĉ_0 = Ȳ′ − ĉ_1 X̄′.  (8)

The estimated values of a and b are given by â = exp(ĉ_0) and b̂ = −ĉ_1.

Finally, these estimates of the parameters are substituted into (3) to form the regression forecast model.
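The two-step procedure above can be sketched end to end, assuming the model forms P = B·exp(−K·t^α) and K = a·e^{−b/T}. Here a simple grid search over α stands in for the successive approximation step, and all names and data are illustrative:

```python
import math

def linfit(xs, ys):
    """Ordinary least squares for y = b0 + b1*x, as in the linearized steps."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return ybar - b1 * xbar, b1

def tsm(data, alphas):
    """Two-step method. data maps temperature T -> [(t, P), ...]; alphas is a
    candidate grid standing in for the successive approximation over alpha."""
    best = None
    for alpha in alphas:
        Bs, Ks, sse = [], {}, 0.0
        for T, obs in data.items():
            # Step 1: with alpha fixed, ln P = ln B - K * t**alpha is linear.
            b0, b1 = linfit([t ** alpha for t, _ in obs],
                            [math.log(P) for _, P in obs])
            Bs.append(math.exp(b0))
            Ks[T] = -b1
            sse += sum((P - math.exp(b0 + b1 * t ** alpha)) ** 2 for t, P in obs)
        if best is None or sse < best[0]:
            best = (sse, alpha, sum(Bs) / len(Bs), Ks)
    _, alpha, B, Ks = best
    # Step 2: ln K = ln a - b / T is linear in 1/T (the Arrhenius fit).
    c0, c1 = linfit([1.0 / T for T in Ks], [math.log(Ks[T]) for T in Ks])
    return {"alpha": alpha, "B": B, "a": math.exp(c0), "b": -c1}
```

On noiseless data generated from known parameters this sketch recovers them; the limitations discussed below concern its behavior on noisy data, where the log-transformed fits are no longer least squares solutions in the original variables.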

However, the above-mentioned TSM has the following limitations.

First, the estimates of the parameters in (1) and (2), obtained by regression after the logarithm transformation, are generally not the least squares solution in the original variables [8].

Second, substituting the estimates of the parameters in (1) into (2) may lead to large errors. This is because the estimates of the parameters a and b in (3) rely heavily on the precision of K̂: a small change in K̂ leads to a considerable change in the values of â and b̂. Furthermore, TSM is a tedious calculation method.

Finally, the parameter B in (3) is estimated by the average of the values fitted at the individual temperatures, and the adequacy of this average needs to be verified.

To address the limitations above, the purpose of this paper is to adopt MM to estimate the four parameters in (3) directly.

2. Marquardt’s Method

The general form of the nonlinear regression model is

y = f(x_1, …, x_k; β_1, …, β_p) + ε,  (9)

where f is a known nonlinear function, x_1, …, x_k are a set of independent variables, β_1, …, β_p are a set of unknown parameters to be estimated, and ε is the random error. If x and y are observed N times, N sets of observations (x_{i1}, …, x_{ik}; y_i), i = 1, …, N, can be obtained.

Substituting the ith set of observations of the independent variables into the model (9), we see that

y_i = f(x_{i1}, …, x_{ik}; β_1, …, β_p) + ε_i,  i = 1, …, N.  (10)

Since the x_{ij} are known values, we deduce that f is a function of β_1, …, β_p. For a given initial value β^(0) = (β_1^(0), …, β_p^(0)), we expand f using Taylor's formula at β^(0) and omit the quadratic and higher-order terms. The expansion is as follows:

f(x_i; β) ≈ f(x_i; β^(0)) + Σ_{j=1}^{p} (∂f/∂β_j)^{(0)}_i Δβ_j,  Δβ_j = β_j − β_j^(0),  (11)

where (∂f/∂β_j)^{(0)}_i denotes the partial derivative evaluated at β^(0) and the ith observation. All the quantities in (11) except the increments Δβ_1, …, Δβ_p are known. It is clear that the right-hand side of (11) is a linear function of Δβ_1, …, Δβ_p. Thus, we apply the least squares method to (11) and set

Q = Σ_{i=1}^{N} [y_i − f(x_i; β^(0)) − Σ_{j=1}^{p} (∂f/∂β_j)^{(0)}_i Δβ_j]² + d Σ_{j=1}^{p} (Δβ_j)²,  (12)

where d ≥ 0 is called the damping factor. When d = 0, this method of linearization becomes the Gauss-Newton method [9], which is a special case of MM. Moreover, the selection of initial iteration values for the Gauss-Newton method is more demanding than that for MM.

In order to minimize Q, the first partial derivatives of Q with respect to Δβ_1, …, Δβ_p should be zero; that is,

∂Q/∂(Δβ_j) = 0,  j = 1, …, p.  (13)

The equality (13) can be turned into the following form:

Σ_{k=1}^{p} a_{jk} Δβ_k + d Δβ_j = g_j,  j = 1, …, p,  (14)

where

a_{jk} = Σ_{i=1}^{N} (∂f/∂β_j)^{(0)}_i (∂f/∂β_k)^{(0)}_i,  g_j = Σ_{i=1}^{N} [y_i − f(x_i; β^(0))] (∂f/∂β_j)^{(0)}_i.  (15)

Thus

β^(1) = β^(0) + Δβ.  (16)

Obviously, this solution depends on the initial values β^(0) and d. If all the absolute values |Δβ_j| for j = 1, …, p are quite small, the estimation will be considered successful. On the contrary, if some |Δβ_j| is rather large, we will insert the β^(1) calculated in the previous step into (14) as a new β^(0). Then we compute the updated values of Δβ from (14) and insert them back into (14) as the new starting point. We iterate this process until all the absolute values |Δβ_j| can be ignored. Since the a_{jk} and g_j are fixed in (14), the larger the value of d is, the smaller the absolute values of the Δβ_j are. Therefore, the value of d should not be too large; otherwise the number of iterations will increase. The guideline for selecting the value of d is whether the residual sum of squares is decreasing.
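The damped normal equations (14) and the shrinking effect of d can be illustrated on a toy one-parameter model y = e^{βx}; this example is our own illustration, not the paper's model:

```python
import math

def marquardt_step(xs, ys, beta0, d):
    """One damped Gauss-Newton step for the toy model y = exp(beta * x).

    Implements (14)-(16) in the scalar case: (a + d) * dbeta = g, where
    a = sum of squared derivatives and g = sum of residual * derivative.
    """
    a = g = 0.0
    for x, y in zip(xs, ys):
        f = math.exp(beta0 * x)   # model value at the current estimate
        df = x * f                # partial derivative wrt beta at beta0
        a += df * df
        g += (y - f) * df
    dbeta = g / (a + d)           # the damping factor d shrinks the step
    return beta0 + dbeta
```

With a and g fixed, increasing d can only shrink |Δβ|, which is exactly the trade-off discussed above: more damping means safer but smaller steps, hence more iterations.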

2.1. Steps for Calculating the Parameters in (3)

There are two independent variables (aging time t and aging temperature T) and four unknown parameters (B, a, b, and α) in (3). The steps for solving the nonlinear equations for the four parameters are as follows.

(a) Calculate the partial derivatives of P in (3) with respect to B, a, b, and α, respectively; writing E = e^{−b/T}, we obtain

∂P/∂B = exp(−a E t^α),  ∂P/∂a = −B t^α E exp(−a E t^α),  ∂P/∂b = (B a t^α E / T) exp(−a E t^α),  ∂P/∂α = −B a t^α E ln t · exp(−a E t^α).  (17)

(b) Select the initial iteration values of the parameters, that is, β^(0) = (B^(0), a^(0), b^(0), α^(0)). Whether the selection of initial values is appropriate determines the amount of calculation and the convergence of the iteration process. This paper uses TSM to estimate the initial values of the parameters according to the aging data in the related paper [6]. These values are also taken as the initial values of the parameters in the random simulation in Section 3.

(c) Insert the sets of observations (t_i, T_i) and the partial derivatives in (17) into (15), and obtain each of the coefficient values in (14). For the first iteration, set the initial value of the damping factor d, calculate from (14) the increments Δβ in (16), and then insert the estimate β^(1) into the original expression (12) to calculate the residual sum of squares Q_1. Obviously, the smaller the value of Q_1 is, the better.

(d) For the second iteration, take β^(1) as the new starting point and adjust the damping factor with a fixed magnification v > 1.

(i) First, reduce the damping factor, that is, set d ← d/v; then obtain the new values Δβ and β^(2), and calculate the new residual sum of squares Q_2.

(ii) If Q_2 < Q_1, the second iteration is done. But if Q_2 ≥ Q_1, restore d to its previous value, recalculate Δβ and β^(2), and recalculate the residual sum of squares Q_2.

(iii) If Q_2 < Q_1, the second iteration is done. But if Q_2 ≥ Q_1, set d ← vd, and recalculate Δβ, β^(2), and Q_2.

(iv) If Q_2 < Q_1, the second iteration is done. Otherwise, keep on increasing d by the factor v until Q_2 < Q_1, and this step is finished.

(e) For the third iteration, take the terminal values of β^(2), d, and Q_2 at the second iteration as the new starting values, respectively. Repeat the whole process of the second iteration until a new Q_3 < Q_2.

(f) Iterate the procedure as in processes (d) and (e) until |Q_{s+1} − Q_s| < ε (tolerance) is satisfied.

But we have to notice that the value of d should not be too large at this time; otherwise, since a large d forces tiny increments Δβ, the condition |Q_{s+1} − Q_s| < ε could hold even though the iteration has not actually converged.
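The steps above can be sketched end to end as follows. The assumed model form P = B·exp(−a·e^{−b/T}·t^α), the Gaussian elimination solver, the overflow safeguard, and all concrete values are our illustrative choices, not the paper's:

```python
import math

def solve(A, g):
    """Solve the linear system (14) by Gaussian elimination with partial pivoting."""
    n = len(g)
    M = [row[:] + [g[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def model(t, T, p):
    B, a, b, alpha = p
    return B * math.exp(-a * math.exp(-b / T) * t ** alpha)

def grad(t, T, p):
    """Partial derivatives of the model with respect to B, a, b, alpha (t > 0)."""
    B, a, b, alpha = p
    E = math.exp(-b / T)
    P = B * math.exp(-a * E * t ** alpha)
    s = E * t ** alpha
    return [P / B, -P * s, P * a * s / T, -P * a * s * math.log(t)]

def marquardt(obs, p, d=0.01, v=10.0, tol=1e-12, itmax=100):
    """Damped iteration: try a smaller d first, grow d by v until Q decreases."""
    p = list(p)
    Q = sum((y - model(t, T, p)) ** 2 for t, T, y in obs)
    for _ in range(itmax):
        G = [grad(t, T, p) for t, T, _ in obs]
        r = [y - model(t, T, p) for t, T, y in obs]
        A = [[sum(Gi[j] * Gi[k] for Gi in G) for k in range(4)] for j in range(4)]
        g = [sum(ri * Gi[j] for ri, Gi in zip(r, G)) for j in range(4)]
        d /= v                                 # step (d)(i): reduce the damping first
        while True:
            Ad = [[A[j][k] + (d if j == k else 0.0) for k in range(4)]
                  for j in range(4)]
            step = solve(Ad, g)
            p_new = [pj + sj for pj, sj in zip(p, step)]
            try:
                Q_new = sum((y - model(t, T, p_new)) ** 2 for t, T, y in obs)
            except OverflowError:
                Q_new = float("inf")           # treat numeric blow-up as a failure
            if Q_new < Q:
                break                          # residual sum of squares decreased
            d *= v                             # steps (d)(ii)-(iv): increase d, retry
            if d > 1e15:                       # safeguard: no productive step found
                return p
        if abs(Q - Q_new) < tol:               # step (f): tolerance reached
            return p_new
        p, Q = p_new, Q_new
    return p
```

On exact data generated from known parameters, the residual sum of squares drops by several orders of magnitude from a perturbed starting point, which is the behavior the comparison in Section 3 relies on.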

3. Random Simulation and Result Analysis

3.1. Random Simulation and Data Processing

Random simulation is a method that uses random numbers to conduct computer simulation. The sample observations obtained by random sampling are utilized to estimate the parameters of the models (see, e.g., [10, 11]).

This paper uses MatLab programming to simulate the data. Follow the steps below.

(1) Determine the model and the initial values of the parameters.

(2) Select a constant randomly for the computation.

(3) Assume that T follows the uniform distribution on a given interval and generate m random numbers from it as the aging temperatures T_1, …, T_m. The number of simulations is m.

(4) After obtaining all the simulated values, insert them into the model and add to each model value a random number following the uniform distribution on (−0.1, 0.1). We can eventually simulate m sets of subsamples.
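A sketch of this simulation procedure follows. The interval endpoints, sample sizes, parameter values, and function names are illustrative stand-ins, since the paper's concrete numbers are not reproduced here:

```python
import math
import random

def simulate(m, n, p, T_range=(290.0, 340.0), t_max=50.0, noise=0.1, seed=0):
    """Generate m subsamples of n noisy observations from the assumed model (3).

    p = (B, a, b, alpha). Noise is uniform on (-noise, noise), as in step (4).
    """
    rng = random.Random(seed)
    B, a, b, alpha = p
    data = []
    for _ in range(m):
        T = rng.uniform(*T_range)        # step (3): temperature drawn uniformly
        sample = []
        for _ in range(n):
            t = rng.uniform(0.0, t_max)
            P = B * math.exp(-a * math.exp(-b / T) * t ** alpha)
            sample.append((t, T, P + rng.uniform(-noise, noise)))  # step (4)
        data.append(sample)
    return data
```

Fixing the seed makes the subsamples reproducible, which is convenient when comparing TSM and MM on the same simulated data.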

In this paper, we take m = 5 simulated temperatures and n = 20 observations per temperature, and the initial values of the four parameters B, a, b, and α are fixed in advance. Then we can simulate 5 sets of data (each with 20 numbers) in Table 1.

According to the simulated data, we first use MatLab programming to determine that the approximate value of α is 0.80 and then calculate the values of the remaining parameters by TSM using SPSS software. The results are displayed in Table 2.

Then we estimate the parameters in the bivariate nonlinear model (3) by MM using SPSS software (the initial values of the parameters here are the same as those used in random simulation). The results are displayed in Table 3.

3.2. Result Analysis

(1) In regression analysis, the coefficient of determination R² = 1 − (residual sum of squares)/(total sum of squares of deviations) is a statistic that measures the goodness of fit of the model under consideration. Specifically, the coefficient of determination is a statistical measure of how well the regression line fits the real data points. The closer R² is to 1, the closer the practical observations lie to the fitted line and the better the goodness of fit of the model is. From Tables 2 and 3, it can be seen that the R² of MM is larger than that of TSM, which indicates that the prediction model of MM is more suitable for fitting the simulated data.
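The coefficient of determination used in this comparison can be computed as follows (a minimal sketch of the formula just stated):

```python
def r_squared(y_obs, y_pred):
    """R^2 = 1 - (residual sum of squares) / (total sum of squares of deviations)."""
    ybar = sum(y_obs) / len(y_obs)
    ss_res = sum((y - f) ** 2 for y, f in zip(y_obs, y_pred))
    ss_tot = sum((y - ybar) ** 2 for y in y_obs)
    return 1.0 - ss_res / ss_tot
```

A perfect fit gives R² = 1, while predicting the sample mean for every point gives R² = 0.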

(2) Comparing the estimates of the parameters obtained by the two methods, we can easily see that the estimates obtained by MM are closer to the initial (true) values used in the simulation, which indicates that using MM to estimate the parameters in the model is more reasonable.

(3) We compare the residual sums of squares. The residual sum of squares for MM is 0.325, and that for TSM is 0.641. The former is only about half of the latter. Obviously, the prediction error of the model obtained by MM is smaller, and the precision of its fitted equation is higher.

4. Conclusion

In this paper, we demonstrate, by theoretical analysis and random simulation, that MM is more suitable for estimating the parameters of the aging lifetime model. Our method not only avoids a good deal of tedious calculation in TSM but also introduces the damping factor, which relaxes the restriction on selecting the initial values. Furthermore, compared with TSM, MM greatly decreases the fitting error between the predicted values and the practical observations, and it yields better-fitting parameter estimates. In addition, the model estimated by MM has higher fitting precision than that estimated by TSM.

We note that the parametric estimation in this paper can also be used in the prediction of lifetime of other materials, such as composite materials (see [12]).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.