Abstract

Storage reliability is an important index of ammunition product quality. It is the core guarantee for the safe use of ammunition and the completion of tasks. In this paper, we develop a prediction model of ammunition storage reliability in the natural storage state, where the main factors affecting ammunition reliability are temperature, humidity, and storage period. A new improved algorithm based on three-stage ant colony optimization (IACO) and the BP neural network is proposed to predict ammunition failure numbers, and the storage reliability is obtained indirectly from the failure numbers. The improved three-stage pheromone updating strategy addresses two problems of the ant colony algorithm: falling into local minima and slow convergence. To deal with the incompleteness of field data, “zero failure” data pretreatment, “inverted hanging” data pretreatment, data normalization, and small sample data augmentation are carried out, and a homogenization sampling method is proposed to extract training and testing samples. Experimental results show that the IACO-BP algorithm achieves better accuracy and stability in ammunition storage reliability prediction than the BP network, PSO-BP, and ACO-BP algorithms.

1. Introduction

Reliability is the primary quality index of military products. On the battlefield, unreliable ammunition products lead to the failure of military tasks or endanger the lives of our soldiers. In a weapon system, reliability is at the core of the design, development, and maintenance of ammunition products. Since most of the life cycle of ammunition products is spent in storage, the storage reliability of ammunition directly affects its field reliability. Quantitative prediction of the storage reliability of ammunition products is a key link of ammunition reliability engineering.

In recent years, scholars have carried out extensive research on reliability in their respective fields, and many models and methods have been devised to predict and evaluate the reliability of different products and systems (see [1–6]). For the reliability of ammunition storage, the traditional methods are based on natural storage condition data and accelerated testing data, respectively, and the storage reliability is predicted by mathematical statistical methods. Based on natural storage condition data, a Poisson reliability mathematical model is established in [7] to predict ammunition storage life. A Binary Logistic Regression (BLR) model is developed in [8] to evaluate recent system failures in non-operational storage. Under the assumption of an exponential distribution, a storage reliability prediction model based on time-truncated data is established in [9]. In [10], a Gamma distribution is used to fit the temperature distribution in the missile during storage, and a proportional risk model is created to predict the effect of temperature on product reliability. In order to forecast the residual storage life, a second-order continuous-time homogeneous Markov model and a stochastic filtering model are utilized in [11] and [12], respectively. In [13], an E-Bayes statistical model is proposed to predict storage life by using the initial failure number.

Based on accelerated test data, the storage life of a fuze is evaluated in [14] by applying a step-stress accelerated test. The ammunition storage reliability is predicted in [15] based on the ammunition failure mechanism and an accelerated life model. In [16], the storage reliability function of ammunition is established based on the state monitoring data of an accelerated test. The Arrhenius acceleration algorithm is used in [17] to design a new model for estimating the acceleration factor. In [18], a prediction method combining the accelerated degradation test with the accelerated life test is proposed, and the Monte Carlo method is used to generate pseudo-failure life data under various stress levels to evaluate reliability and predict product life. Since traditional statistical methods usually need the original life distribution information to solve reliability prediction problems, they are conservative when dealing with uncertain life distributions, highly nonlinear problems, and small sample data problems.

Many scholars have recently considered intelligent algorithms for reliability prediction. Specifically, a comprehensive prediction model combining SVM with Bayes methods is given in [19] to predict the reliability of small samples. A wavelet neural network is established in [20] to predict the data block of a hard disk storage system. Two kinds of missile storage reliability prediction models, a BP network and an RBF network, are designed and compared in [21]. Artificial neural network equipment degradation models are proposed in [22, 23] to predict the remaining life of equipment. A particle swarm optimization model is employed in [24–26] to solve the reliability optimization problem with uncertainties. Intelligent algorithms based on the original data retain the information in the data to the greatest extent. It is noted that intelligent prediction algorithms do not depend on the prior distribution of ammunition life, so the exploration of the prior distribution of the raw data is eliminated. Intelligent algorithms therefore provide a new way to predict the reliability of ammunition storage. However, there are still many problems to be solved when applying intelligent algorithms to reliability prediction, such as local minima, slow convergence speed, and the high sensitivity of SVM to missing data.

In this paper, a prediction model of the ammunition failure number is proposed, in which a new three-stage ACO and BP neural network is created. This prediction model excavates the mathematical relationship between the storage temperature, humidity, and storage period of ammunition under natural conditions and the number of ammunition failures. The reliability of the ammunition can then be obtained indirectly from the failure number. In the aspect of sample selection, “zero failure” data pretreatment, “inverted hanging” data pretreatment, normalization, small sample data augmentation, and homogenization sampling are adopted to achieve sample integrity. Experimental results show that our IACO-BP outperforms BP, ACO-BP, and PSO-BP.

The rest of this paper is organized as follows. Section 2 introduces the basic theory of ammunition storage reliability. Section 3 presents the IACO-BP algorithm and describes the implementation details. Section 4 demonstrates the data pretreatment methods and the training and testing sample extraction strategy. Section 5 shows the experimental results and compares the algorithm performance with BP, PSO-BP, and ACO-BP. The conclusions are presented in Section 6.

2. Basic Theory of Ammunition Storage Reliability

The storage reliability of ammunition is the ability of the ammunition to fulfill the specified functions within the specified storage time and under the specified storage conditions; it is the probability that the ammunition keeps the specified function. Here, R(t) = P(T > t), where T is the storage time before ammunition failure and t is the required storage time.

According to the definition of reliability, we can obtain the following:

R(t) = (N_0 - f(t)) / N_0,    (1)

where N_0 is the initial ammunition number and f(t) is the cumulative failure number of ammunitions from the initial time to time t.
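As a simple numerical illustration of (1) (the numbers are hypothetical and not taken from the data set): if N_0 = 10 rounds are stored and f(t) = 2 of them have failed by time t, then R(t) = (10 - 2)/10 = 0.8.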

The storage of modern ammunitions is a complex system composed of chemical, mechanical, and photoelectric materials, and ammunitions are easily affected by different environmental factors. For example, mold on nonmetallic parts degrades the insulating ability of insulation materials, moisture in the charge reduces the explosive force of the ammunition, and plastic hardening results in self-ignition of the ammunition. Research shows that temperature and humidity are the two main factors that affect ammunition reliability, and the storage period is an important parameter of ammunition reliability.

3. IACO and BP Neural Network

3.1. Traditional Ant Colony Optimization Algorithm (ACO)

Ant Colony Optimization (ACO), proposed by Dorigo [27], is a global intelligent optimization algorithm that simulates the foraging behavior of ants. Ants that set out to find food always leave a secretion called pheromone on the path, and the following ants adaptively decide the shortest path to the food by the residual pheromone concentration. The overall cooperative behavior of the foraging ant colony constitutes a positive feedback phenomenon. The ant colony algorithm simulates the optimization mechanism of information exchange and cooperation among individuals in this process: it searches for the global optimal solution through information transmission and accumulation in the solution space.

The main steps of ant colony algorithm are described as follows.

Step 1 (setting the initial parameter values). The number of ants is set to Q. Let ψ_1, ψ_2, ..., ψ_n denote the n parameters to be optimized. Each parameter ψ_i takes M candidate values within its value range, and these values form the set I_i. At the initial time, each element carries the same pheromone concentration, recorded as τ_ij(0) = τ_0, where τ_ij denotes the pheromone on the j-th element of I_i. The maximum number of iterations is N_max.

Step 2 (choosing paths according to the selection strategy). Start the ants. Each ant starts from the set I_1 and chooses its foraging path according to the route selection strategy (2):

P_ij^k = τ_ij / Σ_{u=1}^{M} τ_iu.    (2)

Equation (2) describes the probability that ant k selects the j-th value of parameter i. Ants determine the next foraging position according to the maximum-probability selection rule.

Step 3 (updating pheromone). According to the selection strategy in Step 2, each ant selects one element in each set I_i; let Δt denote the time taken by this process. When the ants have finished a cycle of parameter selection, the pheromones are updated. The following updating strategy is adopted:

τ_ij(t + Δt) = (1 - ρ)·τ_ij(t) + Δτ_ij,    (3)

where ρ is the pheromone volatilization coefficient and (1 - ρ) is the pheromone residual degree;

Δτ_ij = Σ_{k=1}^{Q} Δτ_ij^k,    (4)

where Δτ_ij^k is the increment of the pheromone left by ant k during the time interval (t, t + Δt];

Δτ_ij^k = C / L_k if ant k selects the j-th value of parameter i in this cycle, and Δτ_ij^k = 0 otherwise,    (5)

where C is a constant representing the sum of the pheromones released by an ant after the completion of a cycle, and L_k indicates the total path length of ant k in this foraging cycle.

Step 4 (repeating Step 2 to Step 3). When all the ants select the same path or the maximum number of iterations N_max is reached, the algorithm ends.
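To make the selection and update rules (2)-(5) concrete, the following Python sketch performs one foraging cycle for a small problem. The array tau corresponds to τ_ij, and the symbols n, M, Q, ρ, C, and τ_0 follow the notation above; the concrete numbers, the candidate values, and the placeholder path-length function are illustrative assumptions only (in IACO-BP the path length would be tied to the BP network error), and the sketch samples values with probability (2) by roulette wheel.

import numpy as np

rng = np.random.default_rng(0)
n, M, Q = 4, 20, 30            # parameters to optimize, candidate values per parameter, ants
rho, C, tau0 = 0.3, 1.0, 1.0   # volatilization coefficient, pheromone constant, initial pheromone
candidates = rng.uniform(-1.0, 1.0, size=(n, M))   # the value sets I_i
tau = np.full((n, M), tau0)                        # pheromone tau_ij on each candidate value

def path_length(choice):
    # Placeholder cost; in IACO-BP this would be the BP training error of the chosen parameters.
    return float(np.sum(candidates[np.arange(n), choice] ** 2)) + 1e-6

choices = np.zeros((Q, n), dtype=int)
for k in range(Q):                     # each ant picks one value per parameter
    for i in range(n):
        prob = tau[i] / tau[i].sum()   # selection probability, equation (2)
        choices[k, i] = rng.choice(M, p=prob)

delta = np.zeros_like(tau)
for k in range(Q):
    delta[np.arange(n), choices[k]] += C / path_length(choices[k])   # equation (5), summed as in (4)
tau = (1.0 - rho) * tau + delta        # volatilization and accumulation, equation (3)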

3.2. Improved Ant Colony Optimization Algorithm (IACO)

In the traditional ACO algorithm, the initial pheromone concentration at each location is the same, so the ants choose their initial foraging paths randomly with equal probability. Since the descendant ants choose paths according to the cumulative pheromone concentration left by the previous generation, the number of ants on non-globally-optimal paths may be much larger than that on the optimal path after the initial selection. This initial random selection strategy increases the possibility of selecting non-optimal paths, and the algorithm easily falls into a local optimum.

Here, we focus on revising the pheromone update strategy of the traditional ant colony algorithm. In this paper, the route selection process of the ant colony is divided into three stages: the pheromone pure increase stage, the pheromone volatilization and accumulation stage, and the pheromone doubling stage. The pheromone update strategies of the three stages are given by (6), (3), and (7), respectively.

The strategy of the pheromone pure increase stage is as follows:

τ_ij(t + Δt) = τ_ij(t) + Δτ_ij,    (6)

where Δτ_ij is calculated as in (4).

The strategy of the pheromone volatilization and accumulation stage is the same as the traditional ant colony pheromone updating strategy, which is described in (3).

The strategy of the pheromone doubling stage is expressed as follows:

τ_ij(t + Δt) = (1 - ρ)·τ_ij(t) + σ·Δτ_ij,    (7)

where the parameter σ is the doubling factor of the process pheromone.

The procedure of IACO algorithm is as shown in Figure 1.

The pheromones only accumulate and do not volatilize during the first N1 iterations of the algorithm; this stage is defined as the pure pheromone addition stage. The total amount of pheromone increases during this stage, and the increment on each path is inversely proportional to the length of the path the ants walk through, which means that the pheromone concentration of shorter paths increases more during this process. The difference in cumulative pheromone concentration between the non-optimal paths and the optimal path is reduced, and this strategy improves the possibility of selecting the optimal path. Since the pheromone volatilization factor is eliminated in the pure increase stage, the convergence speed of the algorithm slows down, so in the following two stages we adopt two strategies to accelerate convergence. In the middle of the iterations, the traditional pheromone updating strategy, which combines pheromone volatilization and accumulation, is adopted; while speeding up the volatilization of pheromones, the possibility of selecting non-optimal paths is still taken into account. In the last stage of the iterations, the process pheromone doubling factor is introduced once the number of iterations reaches N2; the growth rate of the pheromone is further increased, and the search for the optimal solution is accelerated. This stage is called the pheromone doubling stage.
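The three-stage strategy can be summarized as a single pheromone update rule that switches on the iteration count. The sketch below is a minimal Python illustration; N1 = 200 and N2 = 500 follow the settings reported in Section 5.1, while the values of ρ and the doubling factor σ are placeholders.

import numpy as np

def update_pheromone(tau, delta, iteration, N1=200, N2=500, rho=0.3, sigma=2.0):
    # Three-stage IACO pheromone update: equations (6), (3), and (7).
    if iteration <= N1:                            # stage 1: pure increase, no volatilization
        return tau + delta                         # equation (6)
    elif iteration <= N2:                          # stage 2: volatilization and accumulation
        return (1.0 - rho) * tau + delta           # equation (3)
    else:                                          # stage 3: pheromone doubling
        return (1.0 - rho) * tau + sigma * delta   # equation (7)

# Example: the same increment is weighted differently in each stage.
tau, delta = np.ones((2, 3)), 0.1 * np.ones((2, 3))
print(update_pheromone(tau, delta, 100))   # stage 1
print(update_pheromone(tau, delta, 300))   # stage 2
print(update_pheromone(tau, delta, 600))   # stage 3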

3.3. BP Neural Network

The BP neural network [28] is a multilayer feedforward artificial neural network that simulates the cognitive process of human brain neurons. The BP algorithm updates the weights and thresholds of the network continuously through forward information transmission and error back propagation. The network is trained repeatedly by these forward and backward processes until the error requirement is satisfied. The BP network has strong self-learning and fault-tolerance properties. Figure 2 shows the topology of a three-layer BP network.

The BP network fitness function can be described as follows:

E(w_ij, v_qk, θ_j, γ_k) = (1/2) Σ_{p=1}^{P} Σ_{k=1}^{n} (Y_k^p - O_k^p)^2,    (8)

where m is the number of nodes in the input layer, h is the number of nodes in the hidden layer, and n is the number of nodes in the output layer; P is the total number of training samples; w_ij and v_qk are the network weight from node i in the input layer to node j in the hidden layer and the network weight from node q in the hidden layer to node k in the output layer, respectively; θ_j and γ_k are the thresholds of node j in the hidden layer and node k in the output layer, respectively; and O_k^p and Y_k^p are the network output and the expected value of training sample p, respectively. The BP network adjusts the error function through bidirectional training to meet the error requirement, determines the network parameters, and obtains a stable network structure.
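The fitness in (8) can be evaluated by a single forward pass through the m-h-n network. The following sketch is a minimal Python illustration assuming the tansig hidden activation and purelin output used later in Section 5.1; it is written as a sum of squared errors (a mean over the P samples would differ only by a constant factor), and the random parameters at the bottom are for illustration only.

import numpy as np

def bp_fitness(W, theta, V, gamma, X, Y):
    # W: (m, h) weights w_ij, theta: (h,) hidden thresholds,
    # V: (h, n) weights v_qk, gamma: (n,) output thresholds,
    # X: (P, m) training inputs, Y: (P, n) expected outputs.
    H = np.tanh(X @ W - theta)         # tansig hidden layer
    O = H @ V - gamma                  # purelin output layer
    return 0.5 * np.sum((Y - O) ** 2)  # sum-of-squared-error fitness E in (8)

rng = np.random.default_rng(0)
m, h, n, P = 3, 11, 1, 8
E = bp_fitness(rng.normal(size=(m, h)), rng.normal(size=h),
               rng.normal(size=(h, n)), rng.normal(size=n),
               rng.normal(size=(P, m)), rng.normal(size=(P, n)))
print(E)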

3.4. IACO-BP Algorithm

When the BP neural network is used for prediction, the initial weights and thresholds of each layer are given randomly, which causes large fluctuations when forecasting the same data. The IACO-BP algorithm maps the optimal solution generated by the IACO algorithm to the weights and thresholds of the BP neural network. This strategy largely removes the randomness of the initial weights and thresholds of the BP network and effectively improves the accuracy and stability of BP network prediction. The specific steps of the algorithm are as follows.

Step 1 (normalizing the sample data).

x_i' = (x_i - x_min) / (x_max - x_min),    (9)

where x_max and x_min are the maximum and minimum values in the sample data, respectively, and x_i is the i-th input variable value. The range of x_i' is the interval [0, 1].

Step 2 (establishing the three-layer network topology m × h × n). Determine the values of m, h, and n.

Step 3 (initializing ant colony parameters). Set the parameters M, Q, τ_0, C, ρ, σ, N_max, and the dimension of the BP neural network parameters.

Step 4 (starting the ants). Each ant starts from a location element in the sets I_i to find the shortest path. The pheromone is updated according to the improved three-stage pheromone update strategy.

Step 5. Repeat Step 4 until all ants choose the same path or the maximum number of iterations is reached; then turn to Step 6.

Step 6. Map the global optimal value found in Step 5 to the initial weights and thresholds of the BP neural network. Output the predicted value of the network and calculate the fitness function E in (8). Propagate the error back to the output layer and adjust the weights and thresholds. Repeat the above process until the terminating condition of the BP algorithm is satisfied or the maximum number of BP iterations is reached.

Step 7. Reverse the normalization and restore the original form of the test data:

y_i = y_i'·(y_max - y_min) + y_min,    (10)

where y_max and y_min are the maximum and minimum values of the network output data, respectively, and y_i' is the i-th output value.

The framework of IACO-BP algorithm is as shown in Algorithm 1.

Initialize the ant colony parameters;
 begin:
  while (ants_i < ants_num)
   ants_i chooses a path based on pheromone concentration;
   ants_i++;
   // Select a different pheromone update strategy by stage
   if (iterations_num < N1)
    update pheromones according to the pheromone pure increase strategy;
   if (N1 < iterations_num < N2)
    update pheromones according to the pheromone volatilization and accumulation strategy;
   if (iterations_num > N2)
    update pheromones according to the pheromone doubling strategy;

  if (current_error > margin_error)
   ants_i = 1;
   goto begin;
  else
   use the ant colony result as the initial solution of the neural network;
   train the neural network;
   result = neural network training results;
  return result;
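Step 6 of the procedure above maps the global optimal value found by the ant colony onto the initial weights and thresholds of the BP network. The sketch below shows one possible decoding of a flat solution vector for the 3 × 11 × 1 topology used in Section 5, whose parameter dimension is 3*11 + 11 + 11*1 + 1 = 56; the segment ordering and variable names are illustrative assumptions, not the authors' stated layout.

import numpy as np

def decode_solution(solution, m=3, h=11, n=1):
    # Split a flat IACO solution vector into BP weights and thresholds.
    solution = np.asarray(solution, dtype=float)
    assert solution.size == m * h + h + h * n + n      # 56 for the 3-11-1 network
    W, rest = solution[:m * h].reshape(m, h), solution[m * h:]
    theta, rest = rest[:h], rest[h:]
    V, gamma = rest[:h * n].reshape(h, n), rest[h * n:]
    return W, theta, V, gamma

W, theta, V, gamma = decode_solution(np.random.default_rng(0).uniform(-1, 1, 56))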

4. Data Collection and Pretreatment

4.1. Data Set

In this paper, the statistical data of ammunition failure numbers at different temperatures, humidity levels, and storage periods under natural storage conditions are selected as sample data, as shown in Table 1. We employ the ammunition failure data to train the ammunition failure prediction model, and ammunition reliability is indirectly predicted from the ammunition failure numbers. The capacity of each sample in the data set is 10.

In Table 1, temperature is given in the SI unit kelvin, represented by the symbol K. Relative humidity is recorded as RH%, indicating the percentage of the actual water vapor content in the air relative to the saturated water vapor content under the same conditions.

4.2. Data Pretreatment

The experimental samples collected in the field usually contain “zero failure” samples and “inverted hanging” samples, which affect the accuracy of ammunition reliability prediction, so it is necessary to pretreat the data before the simulation experiments. The Bayes estimation method is applied to treat the “zero failure” and “inverted hanging” data, artificially added noise signals are used to augment the small sample size, and homogenization sampling is introduced to extract training samples and test samples. Figure 3 shows the process of data pretreatment and extraction.

(1) “Zero Failure” Data Pretreatment. In the random sampling of a small sample ammunition life test, it may happen that all the samples are valid, which means that the ammunition failure number is zero. Since zero failure data provide little effective information for reliability prediction, they need to be pretreated. For the “zero failure” problem of samples No. 1, No. 9, and No. 29 in the data set, an increasing function of the reliability p is selected as the prior distribution π(p) of p. The posterior distribution of p is then calculated by the Bayes method, and the zero failure data are treated accordingly.

When zero failure occurs, the likelihood function of p is L(0 | p) = p^n, where n is the sample capacity. By the Bayes estimation method, the reliability is estimated under the condition of squared loss as shown in (11), and the zero failure is transformed into the equivalent failure number as described in (12):

p̂ = ∫_0^1 p · p^n · π(p) dp / ∫_0^1 p^n · π(p) dp,    (11)

f̂ = n(1 - p̂).    (12)

After “zero failure” processing, the failure numbers of samples No. 1, No. 9, and No. 29 in the data set are all converted to 0.714286.
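The converted value 0.714286 is consistent with choosing the increasing prior π(p) = 3p^2 in (11), which gives p̂ = (n + 3)/(n + 4) and f̂ = n/(n + 4); this particular prior is inferred from the reported value rather than stated explicitly here. The short sketch below only checks the arithmetic for the sample capacity n = 10.

# Zero-failure conversion check under the assumed prior pi(p) = 3*p**2.
n = 10
p_hat = (n + 3) / (n + 4)   # Bayes estimate of reliability under squared loss, cf. (11)
f_hat = n * (1 - p_hat)     # equivalent failure number, cf. (12)
print(p_hat, f_hat)         # 0.9285714..., 0.7142857...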

(2) “Inverted Hanging” Data Pretreatment. Theoretically, under the same storage conditions, the number of ammunition failures is larger for longer storage times, and for the same storage time, the number of failures is larger under poorer storage conditions. The failure numbers of samples No. 14 and No. 15 and of samples No. 26 and No. 27 are inverted with respect to storage time, that is, the “inverted hanging” phenomenon appears, and these data need to be treated.

For each of the two inverted pairs, the failure data at the earlier time t_i are selected as the reference data for the corresponding period, and the data at the later time t_{i+1} are corrected. Since neither inverted sample is at the end of the experiment at time t_{i+1}, the uniform distribution is taken as the prior distribution of the failure probability p_{i+1}, and p_{i+1} is revised to its Bayes estimate p̂_{i+1}, computed from the number of unfailed items in the sample; the corrected number of failures is n·p̂_{i+1}. After the “inverted hanging” pretreatment, the failure numbers of samples No. 15 and No. 27 in the data set are both converted to 2, and the “inverted hanging” phenomenon is eliminated. The data after the zero failure and inverted hanging pretreatments are shown in Table 2.

(3) Normalization of Data. Temperature, humidity, and storage period in the data set have different dimensions and scales, so it is necessary to normalize the data. Normalized data eliminate the influence of variable dimensions and improve the accuracy of evaluation. The normalization method is given in (9).

(4) Small Sample Data Augmentation. Owing to the particularity of ammunition samples, the problem of small sample size is usually encountered in the study of ammunition reliability prediction. In this paper, noise signals are added to the normalized input signals; this method is applied to simulate sample data and increase the sample size, which enhances the generalization ability of the BP network.

The number of samples is set to n_s, and sample k is recorded as (X_k, Y_k), where X_k = (x_1^k, x_2^k, x_3^k) is the input signal vector of sample k and Y_k is its output signal. A noise vector ε is added to the input signal while the output signal remains unchanged, so the new sample becomes (X_k + ε, Y_k). Firstly, noise is added to each of the three components of the input signal vector separately. Then, noise is added to each of the three pairs of components. Finally, noise is added to all three components simultaneously. Each original sample thus yields seven noisy variants, and the 32 data sets are extended to 32 × 8 = 256 data sets. The increase in sample size provides data support for the BP neural network to mine the potential nonlinear laws in the ammunition sample data, and the smoothness of the network training curve and the stability of the network structure are improved.
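A minimal sketch of this augmentation is given below; the noise distribution and amplitude are illustrative assumptions, since they are not specified here.

import itertools
import numpy as np

def augment(X, Y, eps=0.01, seed=0):
    # X: (ns, 3) normalized inputs (temperature, humidity, storage period); Y: (ns,) outputs.
    rng = np.random.default_rng(seed)
    X_aug, Y_aug = [X], [Y]
    for r in (1, 2, 3):                                   # single components, pairs, all three
        for subset in itertools.combinations(range(3), r):
            Xn = X.copy()
            Xn[:, subset] += rng.uniform(-eps, eps, size=(X.shape[0], r))
            X_aug.append(Xn)
            Y_aug.append(Y)                               # outputs remain unchanged
    return np.vstack(X_aug), np.concatenate(Y_aug)

X_big, Y_big = augment(np.random.default_rng(1).random((32, 3)), np.zeros(32))
print(X_big.shape)   # (256, 3)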

4.3. Homogenization Sampling

In practical application scenarios, data often exhibit an aggregation phenomenon, meaning that a certain type of data is concentrated in a specific location of the data set. When analyzing such data, the sample data may fail to represent the overall data. In this paper, a homogenization sampling data extraction method is proposed: training samples and test samples are selected over the whole range of the data, which removes the bias in sample characteristics caused by aggregated sampling and ensures that the samples reflect the overall data characteristics as fully as possible.

The implementation of homogenization sampling is as follows. Assume that there are N sets of sample data, of which X sets are to be used as training samples and Y sets as test samples. After every [N/Y] - 1 sets (if N/Y is not an integer, round it down), one set is taken as test data, until no further test set can be taken. The remaining N - Y sets are used as training samples. The homogenization sampling rule is shown in Figure 4.
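As one reading of this rule, one test sample is taken after every [N/Y] - 1 remaining samples, i.e., every [N/Y]-th sample overall. The small Python sketch below follows that reading (how ties are handled when N/Y is not an integer is an assumption).

def homogenized_split(N, Y):
    # One test sample after every floor(N / Y) - 1 training samples; the rest are training.
    step = N // Y                                        # 25 for N = 256, Y = 10
    test_idx = [i * step + step - 1 for i in range(Y)]   # 24, 49, ..., 249
    train_idx = [i for i in range(N) if i not in set(test_idx)]
    return train_idx, test_idx

train_idx, test_idx = homogenized_split(256, 10)
print(len(train_idx), test_idx[:3])   # 246 [24, 49, 74]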

The data pretreated in Section 4.2 are sampled by homogenization. From the N = 256 groups of samples, Y = 10 groups are taken as test samples. According to the homogenization sampling rule, one sample is taken as a test sample from every [N/Y] - 1 = 24 groups, and the remaining groups are used as training samples. Table 3 demonstrates the precision of four ammunition storage models, obtained with homogenization sampling data and non-homogenization sampling data, respectively.

Table 3 shows that the model accuracy of ammunition storage reliability prediction based on homogenization sampling is much higher than that based on non-homogenization sampling. All algorithms in this paper are therefore modeled and analyzed on the basis of homogenization sampling.

5. The Simulation Experiments

In this section, we present the specific parameter settings and the results of the simulation experiments. The mean square error (MSE) and the mean absolute percentage error (MAPE) are used as performance indicators to evaluate the prediction effect of the model, and the IACO-BP model is compared with the BP, PSO-BP, and ACO-BP models:

MSE = (1/N) Σ_{i=1}^{N} (R̂_i - R_i)^2,

MAPE = (100%/N) Σ_{i=1}^{N} |R̂_i - R_i| / R_i,

where R̂_i and R_i are the expected reliability and the actual reliability, respectively, and N is the total number of data points used for reliability prediction and comparison.
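A minimal sketch of the two indicators, with MAPE expressed as a percentage; R_hat and R denote the predicted and actual reliability series.

import numpy as np

def mse(R_hat, R):
    R_hat, R = np.asarray(R_hat, float), np.asarray(R, float)
    return np.mean((R_hat - R) ** 2)

def mape(R_hat, R):
    R_hat, R = np.asarray(R_hat, float), np.asarray(R, float)
    return 100.0 * np.mean(np.abs(R_hat - R) / R)

print(mse([0.95, 0.90], [0.94, 0.92]), mape([0.95, 0.90], [0.94, 0.92]))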

5.1. Parameter Settings

In this paper, the storage temperature, humidity, and storage period of ammunition under natural storage are used as the input variables of the neural network to predict the number of ammunition failures, and the storage reliability is predicted indirectly from this number. The number of input nodes of the BP network is m = 3, and the number of output nodes is n = 1. According to the empirical formula h = √(m + n) + c, where c is a constant in the interval [1, 10], the value range of the number of hidden layer nodes is [3, 12]. In order to avoid chance results caused by the instability of the neural network, 10 experiments are conducted for each candidate number of hidden layer nodes, and the MSE between the predicted and true values is observed. The trial-and-error results are shown in Table 4.

Table 4 indicates that the mean value of MSE is smallest when the number of hidden layer nodes is 11. Therefore, the neural network topology in this paper is 3 × 11 × 1.
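A quick check of the candidate range produced by the empirical formula given above (the exact form of the rule is a common choice assumed here); the trial-and-error over this range, reported in Table 4, selects h = 11.

import math

m, n_out = 3, 1
candidates = [int(math.sqrt(m + n_out)) + c for c in range(1, 11)]   # h = sqrt(m + n) + c, c in [1, 10]
print(candidates)   # [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]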

The activation function of the hidden layer of the BP network is “tansig”, the activation function of the output layer is “purelin”, and the training function of the network is “trainlm”. The learning rate is set to 0.1. The expected error is . The dimension of the weight and threshold parameters of the BP neural network is 56. Each parameter is randomly assigned 20 values in the interval . The number of ants is . The initial pheromone value is set to . The pheromone volatilization coefficient is . The pheromone increment is . The pheromone pure increase stage terminates at iteration 200, the pheromone doubling stage begins at iteration 500, and the iterations between 200 and 500 constitute the pheromone volatilization and accumulation stage. The doubling factor of the process pheromone is . The maximum number of ant colony iterations is set to .

The algorithms are coded in MATLAB 2016a. The failure numbers of ammunition storage are predicted by the BP neural network algorithm, the ACO-BP algorithm, the PSO-BP algorithm, and the IACO-BP algorithm, respectively, and the results are compared and analyzed.

5.2. Results Analysis

In order to evaluate the performance of the four algorithms, 20 simulation tests are conducted for each algorithm to obtain the values of the prediction error evaluation indices MSE and MAPE. The arithmetic mean values of MSE and MAPE over the 20 tests are used to characterize the accuracy of the four algorithms, while the variance, standard deviation, and range of MSE and MAPE are used to evaluate their stability. The results are shown in Tables 5 and 6.

The mean value of MSE of the IACO-BP algorithm is 1.2e-03, which is 92%, 95%, and 99% lower than that of the ACO-BP, PSO-BP, and BP algorithms, respectively. The mean value of MAPE of the IACO-BP algorithm is 2.1e+00, which is 66%, 79%, and 83% lower than that of the ACO-BP, PSO-BP, and BP algorithms, respectively. Thus, among the four intelligent algorithms, the IACO-BP algorithm has the highest accuracy. For the IACO-BP algorithm, the variance of MSE is 4.6e-07, the standard deviation of MSE is 6.8e-04, and the range of MSE is 2.2e-03; each stability index of IACO-BP is the minimum of the corresponding index among the four algorithms, so the IACO-BP algorithm has the best stability. The MSE value of the BP algorithm fluctuates significantly more than that of the other three optimized BP algorithms, as shown in Figure 5, and IACO-BP has the best stability among the three intelligent BP optimization algorithms, as shown in Figure 6. Table 6 shows the statistics of the number of iterations obtained from 10 random trainings of the four networks, with the maximum number of iterations set to 10000 and the network accuracy set to 0.001. As shown in Table 6, all 10 simulations of the PSO-BP network fail to reach the required accuracy within 10000 iterations and terminate by reaching the maximum number of iterations. The average number of iterations of the IACO-BP network is 339.8, the smallest among the four algorithms, and the MSE of its iteration numbers is 144.88, which is also the smallest among the three algorithms that terminate by satisfying the iterative precision requirement.

Six failure data samples are randomly selected from the field records, and their failure numbers are predicted using the four trained networks. The storage reliability of the ammunition is then predicted indirectly using (1). The results are described in Table 7 and Figure 7. Among the four algorithms, the indirect ammunition storage reliability prediction model based on the IACO-BP algorithm has the best fitting degree and the highest accuracy.

6. Conclusion

In this paper, a prediction model of ammunition storage reliability under natural storage conditions is considered, and the IACO-BP neural network algorithm is proposed to solve this problem. Reliability is predicted indirectly from the number of ammunition failures obtained by the IACO-BP failure number prediction model. In order to improve the accuracy and stability of network prediction, both the data pretreatment and the algorithm are improved. In the data pretreatment aspect, the standardization and rationality of the data samples are achieved by the methods of “zero failure” pretreatment, “inverted hanging” pretreatment, small sample data augmentation, and homogenization sampling. In the algorithm aspect, the pheromone updating strategy of the traditional ant colony algorithm is improved into a three-stage strategy of pure pheromone increase, volatilization and accumulation, and doubling. Compared with the BP, PSO-BP, and ACO-BP models, the experimental results show that the IACO-BP model has clear advantages in precision, stability, and number of iterations.

Data Availability

The raw data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by Open Foundation of Science and Technology on Electro-Optical Information Security Control Laboratory (Grant no. 61421070104), Natural Science Foundation of Liaoning Province (Grant no. 20170540790), and Science and Technology Project of Educational Department of Liaoning Province (Grant no. LG201715).