Discrete Dynamics in Nature and Society

Volume 2013 (2013), Article ID 537675, 7 pages

http://dx.doi.org/10.1155/2013/537675

## An Inventory Controlled Supply Chain Model Based on Improved BP Neural Network

Research Center of Cluster and Enterprise Development, School of Business Administration, Jiangxi University of Finance & Economics, Nanchang 330013, China

Received 22 June 2013; Revised 8 September 2013; Accepted 10 October 2013

Academic Editor: Zhigang Jiang

Copyright © 2013 Wei He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Inventory control is a key factor for reducing supply chain cost and increasing customer satisfaction. However, predicting the inventory level is a challenging task for managers. As one of the most widely used techniques for inventory control, the standard BP neural network suffers from problems such as a low convergence rate and poor prediction accuracy. Aiming at these problems, this paper develops a new, fast-convergent BP neural network model for predicting inventory level. By adding an error offset, this paper deduces the new chain propagation rule and the new weight formula. This paper also applies the improved BP neural network model to predict the inventory level of an automotive parts company. The results show that the improved algorithm significantly outperforms not only the standard algorithm but also several other improved BP algorithms in both convergence rate and prediction accuracy.

#### 1. Introduction

Inventory control is one of the key topics in supply chain management. Inventory usually takes the form of raw materials, work-in-process (WIP) products, semifinished products, or finished products. Inventory cost is a main cost of supply chain management: a drop of just a few percentage points in inventory cost can greatly increase the profits of the whole supply chain. In addition, a sound inventory level can prevent material shortages, maintain the continuity of the production process, and quickly satisfy customers' demand. Therefore, finding the optimal inventory level is both necessary and valuable for supply chain management.

To date, the following inventory control problems need to be addressed [1, 2]:

1. Highly nonlinear models are hard to process.
2. Qualitative indicators are hard to deal with.
3. The fixed indicators of inventory control lack self-adaptation.
4. The information needed by inventory control models is often indirect, and collecting it is time-consuming and inefficient.
5. Inventory control models often ignore the influence of uncertain factors such as lead time, transportation conditions, and changes in demand.

Considering the above problems, traditional inventory control theory can hardly meet the requirements posed by the new environment. Given the uncertainty inherent in inventory control and the strengths of neural networks in prediction, this paper uses a BP neural network to establish the inventory model and predict the inventory level.

A BP neural network is a kind of nonlinear feed-forward network with good nonlinear mapping ability. Theory has proved that a BP network can approximate any nonlinear mapping relationship given enough hidden nodes, with no need to establish an explicit mathematical model. Furthermore, through learning and training, a BP network stores information systematically in its weight matrix. This means a BP network can memorize the characteristics of inventory information and at the same time adapt to changes in the inventory environment. In view of these features, the BP neural network has great advantages in classification and prediction.

However, it is acknowledged that the BP neural network also has problems such as slow convergence and a tendency to converge to local minima when forecasting. Considering these shortcomings of the standard BP algorithm, this paper proposes a new, fast-convergent BP neural network model for predicting inventory level. By adding an error offset, this paper deduces the new chain propagation rule and the updated weight formula. Applying the improved BP neural network model to predict the inventory level of an automotive parts company shows that the improved algorithm significantly outperforms the standard algorithm and several other improved BP algorithms in both convergence rate and prediction accuracy.

This paper proceeds as follows: Section 2 reviews the related literature. Building on the standard BP neural network, Section 3 introduces an improved BP neural network. Section 4 applies the improved BP algorithm to predict the inventory level of an automotive parts company. Section 5 draws conclusions from the results.

#### 2. Literature Review

Recently, more and more scholars have applied neural network techniques to inventory control. Bansal et al. used a neural network-based data mining technique to solve the inventory problem of a large medical distribution company [3]. Based on their neural network model, a prototype was built with data from a large decentralized organization. The prototype succeeded in reducing the organization's total inventory level by 50% while maintaining the same probability that a particular customer's demand would be satisfied. Shanmugasundaram et al. discussed the use of neural network-based data mining and knowledge discovery techniques to optimize inventory levels in a large medical distribution company [4]. They identified strategic data mining techniques for estimating future sales of medical products from past sales data and used recurrent neural networks to predict future sales. Reyes-Aldasoro et al. adopted the neural network technique to create a hybrid framework for analysis, modeling, and forecasting [5]. The framework combined two existing approaches and introduced a new associated cost parameter that served as a surrogate for customer satisfaction. Hong et al. developed an online neural network controller that optimized a three-stage supply chain. With inventory data fed back from an RFID system, the neural network controller rapidly minimized the total cost of the supply chain while satisfying a target order fulfillment ratio [6]. Some of these studies further showed that neural network techniques exceed traditional statistical techniques in forecasting inventory level [7].
In fact, compared with traditional prediction methods, neural networks have unique advantages in prediction problems: high fault tolerance, fast prediction speed, no need to describe the complex relation between characteristic factors and the object, strong adaptability, and good handling of uncertain information [8–10].

Although there are various neural network models, the BP neural network is the most widely used because of its simple structure and strong learning ability, and it has been widely applied to inventory control. Zhang et al. combined the reinforcement learning technique and the BP neural network to propose a new adaptive inventory control method for supply chain management [11]. Wang proposed a neural network-based approach to classifying the inventory risk level of spare parts, using the BP training algorithm to decide the connection weights in the model [12]. Mansur and Kuncoro combined market basket analysis (MBA) and a back-propagation artificial neural network (ANN) to predict inventory levels, using the ANN to predict the inventory requirements of each product [13]. Huang et al. applied a back-propagation network (BPN) to evaluate the criticality class (I, II, III, and IV) of spare parts [14]. They found that the proposed BPN could successfully decrease inventory holding costs by correcting unreasonable target service level settings that had been decided by the criticality class.

The BP neural network mentioned above refers to the standard BP neural network, which is based on the Widrow-Hoff rule and uses the gradient descent method to turn the task of mapping a set of inputs to its correct output into a nonlinear optimization problem. However, the standard BP algorithm has inherent disadvantages such as slow convergence, convergence to local minima, system complexity, and arbitrary network structure selection [15, 16].

Aiming at the weaknesses of the standard BP neural network, scholars have made further studies and proposed different improved BP neural network models [17–25]. The improvements fall mainly into two categories: direct improvements to the BP neural network and improvements based on other new theories. The first category includes adding a momentum factor [17], varying the learning rate dynamically [18], and introducing resilient back propagation (RPROP) [19]. The second category includes introducing the simulated annealing genetic algorithm [24] and the multiple extended Kalman algorithm [25]. All of these improved BP algorithms reduce the training time to some degree and increase the prediction accuracy. Some authors have even applied improved BP algorithms to forecasting inventory level and inventory control [26, 27].
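Of the direct improvements listed above, the momentum factor [17] is the simplest to sketch: each weight update adds a fraction of the previous update to the current gradient step, so updates accumulate along consistent descent directions. A minimal illustration (the function name and default values are ours, not from any of the cited papers):

```python
def momentum_update(w, grad, prev_delta, eta=0.1, alpha=0.9):
    """One momentum-augmented gradient step:
    delta(t) = -eta * grad + alpha * delta(t-1), then w += delta(t).
    eta is the learning rate, alpha the momentum factor."""
    delta = [-eta * g + alpha * d for g, d in zip(grad, prev_delta)]
    w_new = [wi + di for wi, di in zip(w, delta)]
    return w_new, delta
```

On a flat error region where the gradient stays roughly constant, the accumulated term lets the effective step grow up to a factor of 1/(1 - alpha), which is exactly why momentum accelerates convergence.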

By adding an error offset to the error function, this paper puts forward a direct improvement to the standard BP neural network. Using a dataset from an automotive parts company, it shows that the improved BP algorithm not only exceeds the standard BP algorithm in both convergence rate and prediction accuracy but also outperforms several other improved BP neural networks.

#### 3. Improvement of BP Neural Network

##### 3.1. Standard BP Neural Network

The back-propagation (BP) algorithm, one of the most widely used algorithms in artificial neural networks, is a kind of supervised learning algorithm. Its main purpose is to adjust the weight matrix according to the squared error between the actual output and the target output. The squared error is expressed as follows:

$$E = \frac{1}{2}\sum_{k}(t_k - o_k)^2.$$

Here, $k$ denotes the $k$th training sample, $t_k$ denotes the target output of the $k$th training sample, and $o_k$ denotes the actual output of the $k$th training sample. The weights to each neuron are revised according to the following delta rule:

$$w_{ij}^{(l)} \leftarrow w_{ij}^{(l)} + \Delta w_{ij}^{(l)}.$$

Here, $l$ denotes the $l$th layer of the network and $w_{ij}^{(l)}$ denotes the weight on the connection from the $i$th neuron in the $(l-1)$th layer to the $j$th neuron in the $l$th layer. $\Delta w_{ij}^{(l)}$ is expressed as follows:

$$\Delta w_{ij}^{(l)} = -\eta \frac{\partial E}{\partial w_{ij}^{(l)}} = \eta\,\delta_j o_i.$$

Here, $\eta$ denotes the learning rate. By analyzing the above formula, we know that the key of the BP algorithm is the calculation of $\delta_j$.

Suppose that $net_j$ denotes the net input of the $j$th neuron, $o_j$ denotes the output of the $j$th neuron, and $o_i$ denotes the output of the $i$th neuron in the preceding layer. Then $net_j = \sum_i w_{ij} o_i$ and $o_j = f(net_j)$, where $f$ is the sigmoid activation function.

When the $j$th neuron is an output node, we have

$$\delta_j = (t_j - o_j)\,o_j(1 - o_j).$$

If the $j$th neuron is not an output node, it must be a hidden node, and we have

$$\delta_j = o_j(1 - o_j)\sum_k \delta_k w_{jk},$$

where $k$ runs over the neurons in the layer above.

From the above analysis, we know that the standard BP algorithm updates the weights of its output layer and hidden layer according to these formulas. Since a bias can be regarded as a weight on a constant unit input, its update is derived in the same way, so we omit the details.
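To make the derivation concrete, here is a minimal pure-Python sketch of one training step of the standard algorithm for a three-layer network with sigmoid units; the class and parameter names are ours, not the paper's:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNet:
    """Minimal three-layer BP network trained with the standard delta rule."""
    def __init__(self, n_in, n_hid, n_out, eta=0.5, seed=1):
        rng = random.Random(seed)
        # each row holds incoming weights plus a bias treated as an extra unit input
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hid + 1)] for _ in range(n_out)]
        self.eta = eta

    def forward(self, x):
        xb = x + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = self.h + [1.0]
        self.o = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in self.w2]
        return self.o

    def train_step(self, x, t):
        o = self.forward(x)
        # output-layer delta: (t_j - o_j) * o_j * (1 - o_j)
        d_out = [(tj - oj) * oj * (1 - oj) for tj, oj in zip(t, o)]
        # hidden-layer delta: o_j * (1 - o_j) * sum_k delta_k * w_jk
        d_hid = [self.h[j] * (1 - self.h[j]) *
                 sum(d_out[k] * self.w2[k][j] for k in range(len(d_out)))
                 for j in range(len(self.h))]
        hb, xb = self.h + [1.0], x + [1.0]
        for k, row in enumerate(self.w2):          # update output-layer weights
            for j in range(len(row)):
                row[j] += self.eta * d_out[k] * hb[j]
        for j, row in enumerate(self.w1):          # update hidden-layer weights
            for i in range(len(row)):
                row[i] += self.eta * d_hid[j] * xb[i]
        # squared error before the update
        return sum(0.5 * (tj - oj) ** 2 for tj, oj in zip(t, o))
```

Repeated calls to `train_step` on the same sample drive the squared error downward, which is the behavior the convergence comparison in Section 5 measures.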

##### 3.2. Improved BP Neural Network

To improve the convergence rate of the standard BP algorithm, we propose a new algorithm that achieves this goal by adding an error offset.

The essence of the BP algorithm is the forward propagation of data and the backward propagation of errors: the weights are revised according to the errors propagated backward. However, the convergence rate of the standard BP algorithm is slow and often cannot satisfy practical requirements. Therefore, we propose a new method: adding an error offset in back propagation, which greatly improves the convergence rate; the experiment below illustrates that its effect is quite pronounced. Here, we redefine the squared error by adding an error offset to the error function, and what follows is our deduction of the new delta rule from the revised squared error.

For the right-hand side of the revised error expression, the first term is the same as in the standard BP algorithm; what we need to calculate is the additional term introduced by the error offset, first for the case where the $j$th neuron is an output node.

From this, the new weight formula for the output layer follows.

If the $j$th neuron is not an output node, it must be a hidden node. To avoid confusion, we suppose that the layer above it is the output layer and derive the corresponding term.

From this, the new weight formula for the hidden layer follows.
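The exact form of the paper's offset equations did not survive extraction, so as a purely hypothetical illustration, suppose the offset simply enlarges the output-layer error term by a small constant pushed in the direction of the error; the function name `delta_output_with_offset` and the parameter `lam` are our own inventions, not the paper's notation:

```python
def delta_output_with_offset(t, o, lam=0.05):
    """Hypothetical output-layer delta with an assumed error offset:
    the standard term (t - o) * o * (1 - o), with the raw error (t - o)
    enlarged by lam in its own direction. With lam = 0 this reduces
    exactly to the standard BP delta."""
    err = t - o
    offset = lam if err >= 0 else -lam
    return (err + offset) * o * (1.0 - o)
```

An enlarged error yields a larger gradient step early in training, which is consistent with the paper's motivation for faster convergence, though this particular form is only an assumed stand-in for the lost derivation.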

#### 4. Model Construction

This paper uses the dataset of an automotive parts company to train the improved BP neural network. Automobiles today are composed of a large number of parts. These parts are produced on the demand of automobile manufacturers and then sent to assembly factories to form a complete product, so the whole production process of an automobile takes the form of a supply chain. Achieving the highest overall efficiency requires the cooperation of all the suppliers, manufacturers, wholesalers, and retailers, and inventory control is an important aspect of that cooperation. In the following, this paper uses the improved BP neural network to forecast the inventory level of bearings, one of the components of an automobile.

##### 4.1. Factors Influencing Inventory Control and Selection of Sample

An accurate inventory level is usually the precondition for good inventory management. For inventory management, inventory control costs, customer service levels, and inventory control quality are the main factors for estimating the inventory level. Therefore, in the design of the inventory control system, we mainly use these factors for prediction. They are described as follows [2].

*(1) Various Costs*. Costs are one of the main indicators for evaluating an inventory control strategy; they include all the expenses in product purchase, production, and sales. For enterprises, analyzing inventory control costs can effectively reduce overall cost. However, inventory control costs have many components, and these components influence each other. Therefore, breaking inventory control costs down in detail and analyzing the accumulated data of business systems to find the main factors helps enterprises make corresponding decisions and control all kinds of costs. The costs mainly include ordering cost, storage cost, transportation cost, and shortage cost.

*(2) Demand Level*. The purpose of inventory control is to satisfy demand as well as possible, so demand is another important factor influencing inventory control. Demand may be deterministic, stochastic, or seasonal. Demand level is positively correlated with inventory level.

*(3) Supply Level*. This refers to the supply level of producers' finished products. It is positively correlated with inventory level.

*(4) Quantity of Substitutes*. This refers to the number of other part types that can substitute for the parts used. It is negatively correlated with inventory level.

*(5) Lead Time*. This refers to the period from sending the order to being ready for production. It includes the time for ordering, waiting time, the suppliers' preparation time for delivery, transportation time, the time for inspection and acceptance at warehouse entry, and preparation time before use. It is positively correlated with inventory level.

*(6) Customer Service Level*. This refers to the probability that an enterprise can satisfy customers' needs once they place an order. It is negatively correlated with inventory level: the higher the customer service level, the lower the inventory level. In this case, we use 2 (very good), 1 (general), and 0 (poor) to represent the customer service level.

As the training sample for the improved BP neural network, this paper uses the historical data on the factors influencing the safety inventory level, together with the bearing inventory data, of an automotive parts production company in a central province of China from March 2012 to March 2013. We choose 100 groups of data to train the network and then check its prediction ability. The number of training samples cannot be too small; otherwise, the network cannot learn enough, which results in low prediction ability. However, too large a sample leads to redundancy, and the network becomes overfitted. Therefore, this paper uses the 100 groups of data as input for training and prediction and chooses the inventory level as output to establish the BP neural network model. Because the system is nonlinear, the initial values strongly affect whether training falls into a local minimum. The input sample therefore needs to be normalized, so that large input values also fall in the range where the activation function has large gradients.

Before network training, we normalized the training data to a common range (see Table 1).
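The normalization formula itself did not survive extraction; a common choice, assumed here for illustration, is min-max scaling of each factor into a fixed interval:

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Scale a list of raw values linearly into [lo, hi] using the
    assumed min-max form (v - vmin) / (vmax - vmin)."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # guard against a constant column
    return [lo + (hi - lo) * (v - vmin) / span for v in values]
```

Each of the 9 input factors would be scaled independently; this keeps large raw values (e.g. costs) from saturating the sigmoid units.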

##### 4.2. Network Variables

Any continuous function can be approximated by a three-layer artificial neural network, so this paper adopts a three-layer BP network structure. When information is input into the network, it is first transmitted from the input layer to the hidden layer; with the work of the activation function, it is then transmitted to the output layer. There are 9 input factors, and the output is the inventory level. The network variables are selected as follows.

*(1) Input Layer.* The input layer includes 9 factors: storage cost (X1), ordering cost (X2), shortage cost (X3), transportation cost (X4), demand level (X5), supply level (X6), quantity of substitutes (X7), waiting time (X8), and service level (X9).

*(2) Hidden Layer.* A network with one or two hidden layers usually has the best convergence properties; with no hidden layer or too many hidden layers, the convergence behavior is worse. Theory has proved that networks with biases, at least one sigmoid (S-type) hidden layer, and one linear output layer can approximate any nonlinear function. That is, a three-layer BP network with one hidden layer can approximate any nonlinear function.

According to an empirical formula relating $m$, the number of hidden-layer nodes, to $n$, the number of input-layer nodes, we set the number of hidden nodes accordingly.
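The empirical formula itself did not survive extraction; a widely cited rule of thumb, assumed here for illustration, is $m = \sqrt{n + l} + a$ with $a \in [1, 10]$, where $n$ and $l$ are the input- and output-node counts:

```python
import math

def hidden_nodes(n_in, n_out, a=5):
    """Rule-of-thumb hidden-layer size: m = sqrt(n_in + n_out) + a,
    where the additive constant a is conventionally chosen in [1, 10]."""
    return round(math.sqrt(n_in + n_out) + a)
```

With the 9 inputs and 1 output used here, this rule gives hidden-layer sizes between roughly 4 and 13 depending on the choice of $a$, a range that is then typically narrowed by trial training runs.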

*(3) Output Layer.* The number of output-layer nodes equals the number of system outputs. We choose one node: the inventory level of March 2013 to be predicted.

*(4) Selection of Initial Weights and Thresholds*. Because both are random groups of values, we initialize them with small random values.

*(5) Selection of Expected Error and Number of Iterations.* We choose 10000 as the number of iterations and the expected error is 0.1.

#### 5. Training Process and Experimental Result

This paper uses the neural network toolbox of MATLAB 7.6 to program the model for the safety inventory level based on the BP neural network. In the BP neural network model established in this paper, there are 9 inputs, so the number of neurons is relatively large. We preliminarily set the training parameters as follows: the maximum number of training epochs is 10000, the training goal is 0.1, and the learning rate is 0.1. The training code is `net.trainParam.epochs = 10000; net.trainParam.goal = 0.1; net.trainParam.lr = 0.1; net = train(net, P, T);`. After about 1000 iterations, training finished. After the network finishes training, it is tested; we use the data of March 2013 for the test. The prediction code is `p_test = [0.5 0.78 0.63 1 0.43 0.4 0.25 0.08 1]; out = sim(net, p_test);`.

Comparing Figures 1 and 2, we can clearly see that the convergence rate of the improved algorithm is significantly faster than that of the standard algorithm. We select the data from February 1, 2013, to February 20, 2013, for testing; the result is as follows.

From Table 2, we can see that the improved BP algorithm is significantly better than the standard BP algorithm in convergence rate. In addition, we compare our improved BP algorithm with some other improved BP algorithms; the result shows that ours also outperforms the two improved BP algorithms mentioned in the literature review in convergence rate.

As far as prediction accuracy is concerned, Figure 3 shows that our improved BP algorithm significantly exceeds the standard BP algorithm.

From Table 3, which reports the prediction-set error, we can clearly see that our improved BP algorithm not only exceeds the standard BP algorithm but also outperforms the two other improved BP algorithms mentioned in the literature review in prediction performance.

#### 6. Conclusions

We conclude as follows, noting the practical importance of our findings. First, this paper proposes a new, fast-convergent BP algorithm and deduces new chain propagation rules for the neural network by introducing an error offset. Second, this paper applies it to predicting the inventory level of an automotive parts company and achieves good results. The experimental results show that using a neural network to predict inventory is effective: the improved BP algorithm significantly exceeds the standard algorithm in both convergence time and prediction accuracy and also outperforms several other improved BP algorithms on these two main indicators. In this sense, this paper provides a valuable reference for inventory control in supply chains. However, this paper also has limitations. Some problems remain to be solved, such as how to decide the number of hidden-layer nodes and how to optimize the whole network structure. In addition, the introduction of the error offset is based on experience; its theoretical justification still needs further discussion. All these problems await future research.

#### Acknowledgments

This work is supported by the NSFC (71361013 and 71163014) and the Education Department of Jiangxi Province Science and Technology Research Project (11728).

#### References

- [1] P. W. Balsmeier and W. J. Voisin, "Supply chain management: a time-based strategy," *Industrial Management*, vol. 38, no. 5, pp. 24–27, 1996.
- [2] S. Minner, "Multiple-supplier inventory models in supply chain management: a review," *International Journal of Production Economics*, vol. 81-82, pp. 265–279, 2003.
- [3] K. Bansal, S. Vadhavkar, and A. Gupta, "Brief application description. A neural networks based forecasting techniques for inventory control applications," *Data Mining and Knowledge Discovery*, vol. 2, no. 1, pp. 97–102, 1998.
- [4] J. Shanmugasundaram, M. V. N. Prasad, S. Vadhavkar, and A. Gupta, "Use of recurrent neural networks for strategic data mining of sales information," MIT Sloan Working Paper 4347-02; Eller College Working Paper 1029-05, 2002.
- [5] C. C. Reyes-Aldasoro, A. R. Ganguly, G. Lemus, and A. Gupta, "A hybrid model based on dynamic programming, neural networks, and surrogate value for inventory optimisation applications," *Journal of the Operational Research Society*, vol. 50, no. 1, pp. 85–94, 1999.
- [6] S. R. Hong, S. T. Kim, and C. O. Kim, "Neural network controller with on-line inventory feedback data in RFID-enabled supply chain," *International Journal of Production Research*, vol. 48, no. 9, pp. 2613–2632, 2010.
- [7] F. Y. Partovi and M. Anandarajan, "Classifying inventory using an artificial neural network approach," *Computers and Industrial Engineering*, vol. 41, no. 4, pp. 389–404, 2002.
- [8] J. Li, Y. Li, J. Xu, and J. Zhang, "Parallel training algorithm of BP neural networks," in *Proceedings of the 3rd World Congress on Intelligent Control and Automation*, vol. 2, pp. 872–876, July 2000.
- [9] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in *Parallel Distributed Processing: Explorations in the Microstructure of Cognition*, D. E. Rumelhart and J. L. McClelland, Eds., vol. 1, chapter 8, MIT Press, Cambridge, Mass, USA, 1986.
- [10] N. Ampazis and S. J. Perantonis, "Two highly efficient second-order algorithms for training feedforward networks," *IEEE Transactions on Neural Networks*, vol. 13, no. 5, pp. 1064–1074, 2002.
- [11] K. Zhang, J. Xu, and J. Zhang, "A new adaptive inventory control method for supply chains with non-stationary demand," in *Proceedings of the 25th Control and Decision Conference (CCDC '13)*, pp. 1034–1038, Guiyang, China, May 2013.
- [12] W. P. Wang, "A neural network model on the forecasting of inventory risk management of spare parts," in *Proceedings of the International Conference on Information Technology and Management Science (ICITMS '12)*, pp. 295–302, Springer, 2012.
- [13] A. Mansur and T. Kuncoro, "Product inventory predictions at small medium enterprise using market basket analysis approach-neural networks," *Procedia Economics and Finance*, vol. 4, pp. 312–320, 2012.
- [14] Y. Huang, D. X. Sun, G. P. Xing, and H. Chang, "Criticality evaluation for spare parts based on BP neural network," in *Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence (AICI '10)*, vol. 1, pp. 204–206, October 2010.
- [15] Z. Zheng, "Review on development of BP neural network," *Shanxi Electronic Technology*, no. 2, pp. 90–92, 2008.
- [16] H. Yu, W. Q. Wu, and L. Cao, "Improved BP algorithm and its application," *Computer Knowledge and Technology*, vol. 19, no. 5, pp. 5256–5258, 2009.
- [17] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," *Nature*, vol. 323, no. 6088, pp. 533–536, 1986.
- [18] T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon, "Accelerating the convergence of the back-propagation method," *Biological Cybernetics*, vol. 59, no. 4-5, pp. 257–263, 1988.
- [19] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in *Proceedings of the IEEE International Conference on Neural Networks (ICNN '93)*, vol. 1, pp. 586–591, San Francisco, Calif, USA, April 1993.
- [20] C. Charalambous, "Conjugate gradient algorithm for efficient training of artificial neural networks," *IEE Proceedings G*, vol. 139, no. 3, pp. 301–310, 1992.
- [21] M. F. Møller, "A scaled conjugate gradient algorithm for fast supervised learning," *Neural Networks*, vol. 6, no. 4, pp. 525–533, 1993.
- [22] F. D. Foresee and M. T. Hagan, "Gauss-Newton approximation to Bayesian learning," in *Proceedings of the IEEE International Conference on Neural Networks*, pp. 1930–1935, June 1997.
- [23] R. Battiti, "First- and second-order methods for learning: between steepest descent and Newton's method," *Neural Computation*, vol. 4, no. 2, pp. 141–166, 1992.
- [24] Y. Gao, "Study on optimization algorithm of BP neural network," *Computer Knowledge and Technology*, vol. 29, no. 5, pp. 8248–8249, 2009.
- [25] S. Shah and F. Palmieri, "MEKA: a fast, local algorithm for training feedforward neural networks," in *Proceedings of the International Joint Conference on Neural Networks*, pp. 41–46, June 1990.
- [26] X. P. Wang, Y. Shi, J. B. Ruan, and H. Y. Shang, "Study on the inventory forecasting in supply chains based on rough set theory and improved BP neural network," in *Advances in Intelligent Decision Technologies, Smart Innovation, Systems and Technologies*, vol. 4, pp. 215–225, Springer, Berlin, Germany, 2010.
- [27] H. Lican, Z. Yuhong, X. Xin, and F. Fan, "Prediction of investment on inventory clearance based on improved BP neural network," in *Proceedings of the 1st International Conference on Networking and Distributed Computing (ICNDC '10)*, pp. 73–75, Hangzhou, China, October 2010.