Mathematical Problems in Engineering

Volume 2015 (2015), Article ID 481919, 6 pages

http://dx.doi.org/10.1155/2015/481919

## Research and Application of Improved AGP Algorithm for Structural Optimization Based on Feedforward Neural Networks

Computer and Information Engineering College, Guangxi Teachers Education University, Nanning 530023, China

Received 31 May 2014; Revised 18 September 2014; Accepted 7 October 2014

Academic Editor: Yiu-ming Cheung

Copyright © 2015 Ruliang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper improves the adaptive growing and pruning algorithm (AGP). In the improved algorithm (IAGP), network pruning is based on the sigmoidal activation value of a node together with all the weights of its outgoing connections: insignificant nodes are pruned directly, while nodes that are internally related are retained. Network growing is based on the idea of variance, and nodes with high correlation are copied directly. The resulting IAGP improves network performance and efficiency. Simulation results show that, compared with AGP, IAGP can quickly and accurately predict traffic capacity.

#### 1. Introduction

Artificial neural networks have been widely applied in data mining, web mining, multimedia data processing, and bioinformatics [1]. The success of an artificial neural network is largely determined by its structure. Optimizing the network structure is usually a trial-and-error process of growing or pruning; many algorithms therefore combine the two into a hybrid approach [2], such as AGP.

Generally speaking, methods for optimizing neural network structure fall into three categories: growing methods, pruning methods, and hybrids of the two. The first, also known as the constructive method, starts from a minimal network and adds new hidden units as the network is trained on data [3]. For example, the grow-when-required (GWR) algorithm of Marsland adds hidden nodes based on the network's performance requirements [4]. The disadvantages of growing methods are that the initially small network can easily overfit and become trapped in local minima, and growing may also increase the training time [1].
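As an illustration of the constructive strategy, the sketch below (a minimal single-hidden-layer network in NumPy, not the GWR algorithm itself) starts from two hidden units and adds one unit whenever training fails to reach an assumed error target. The toy regression task, learning rate, and thresholds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(net, X):
    H = sigmoid(X @ net["W1"])      # hidden activations
    return H, H @ net["W2"]         # linear output layer

def train(net, X, Y, epochs=3000, lr=0.1):
    """Plain batch gradient descent on the mean squared error."""
    for _ in range(epochs):
        H, out = forward(net, X)
        err = (out - Y) / len(X)
        grad_W2 = H.T @ err
        dH = (err @ net["W2"].T) * H * (1.0 - H)
        net["W2"] -= lr * grad_W2
        net["W1"] -= lr * X.T @ dH
    return float(np.mean((forward(net, X)[1] - Y) ** 2))

def grow(net, n_new=1):
    """Constructive step: append freshly initialised hidden units."""
    n_in, n_out = net["W1"].shape[0], net["W2"].shape[1]
    net["W1"] = np.hstack([net["W1"], rng.normal(0, 0.5, (n_in, n_new))])
    net["W2"] = np.vstack([net["W2"], rng.normal(0, 0.5, (n_new, n_out))])

# Toy regression target y = sin(pi * x); the ones column supplies biases.
x = np.linspace(0, 1, 40).reshape(-1, 1)
X = np.hstack([x, np.ones_like(x)])
Y = np.sin(np.pi * x)

net = {"W1": rng.normal(0, 0.5, (2, 2)), "W2": rng.normal(0, 0.5, (2, 1))}
mse = train(net, X, Y)
while mse > 1e-3 and net["W1"].shape[1] < 10:
    grow(net)                        # add a hidden unit, then retrain
    mse = train(net, X, Y)

print(net["W1"].shape[1], round(mse, 4))
```

Growing stops either when the target error is reached or when an assumed upper bound on the hidden-layer size is hit, mirroring the trade-off noted above: each growth step restarts training, so the total training time increases with the number of added units.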

The second is called the destructive method, which deletes unimportant nodes or weights from an initially large network [5]. Lauret et al. put forward an extended Fourier amplitude sensitivity algorithm to prune hidden neurons: it quantifies the relevance of each neuron in the hidden layer, ranks them, iteratively retains the most favorable neurons using this quantitative information, and prunes those that rank lowest. This method, however, assumes that the outputs and inputs of the hidden neurons are independent [6]; when dependencies exist between them, the method is invalid. Xu and Ho describe a UB-OBS pruning algorithm that prunes the hidden units of a feedforward neural network: it uses orthogonal decomposition to determine which hidden node to prune and then recalculates the weights of the remaining nodes to maintain network performance. The biggest drawback of pruning methods is the need to determine the size of the initial network [7].
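The destructive strategy can be sketched as follows: score each hidden unit, rank the scores, and delete the lowest-ranked units. The scoring rule used here (mean absolute activation times the L1 norm of the outgoing weights) is an illustrative assumption, not the exact sensitivity criterion of [6] or the orthogonal decomposition of [7].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prune_hidden(W1, W2, X, keep):
    """Keep only the `keep` most significant hidden units."""
    H = sigmoid(X @ W1)                       # hidden activations on data X
    score = np.abs(H).mean(axis=0) * np.abs(W2).sum(axis=1)
    order = np.argsort(score)[::-1][:keep]    # indices of the best units
    return W1[:, order], W2[order, :]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
W1 = rng.normal(size=(3, 8))                  # 8 hidden units initially
W2 = rng.normal(size=(8, 2))
W2[5] = 0.0                                   # a clearly useless unit

W1p, W2p = prune_hidden(W1, W2, X, keep=6)
print(W1p.shape, W2p.shape)                   # (3, 6) (6, 2)
```

The zeroed-out unit receives a score of zero and is guaranteed to be pruned, while the network must still be retrained afterwards to compensate, as in the methods cited above.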

Using growing or pruning alone causes more problems, so hybrid growing-and-pruning algorithms have been proposed. A hybrid neither requires choosing the initial network size nor tends to overfit [8], and it combines the two kinds of algorithms complementarily, enlarging their respective advantages and narrowing their disadvantages [1]. AGP is one such growing-pruning hybrid. In its structural design, the algorithm adjusts the neural network based on the sigmoidal activation values of the nodes, pruning low-value neurons, merging similar neurons, and adding corresponding neurons, so it can adjust the network structure self-adaptively [9]. In recent decades, structure optimization algorithms for neural networks have received extensive attention [10–17]. AGP can be applied to nonlinear function approximation problems, but it requires many iterations and complex calculations, and its thresholds and parameters must be set and adjusted frequently.

The feedforward neural network structure optimization algorithm therefore still has much room for improvement, which motivates the IAGP presented in this paper. In IAGP, network pruning is based on the sigmoidal activation value of a node and all the weights of its outgoing connections, while network growing is based on the idea of variance, with high-correlation nodes copied directly. The algorithm optimizes the network structure rapidly, accurately, and self-adaptively.
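The node-copying idea can be sketched as below: the pair of hidden units whose activations are most correlated is located, and one of them is duplicated, with its outgoing weights split in half so that the network output is unchanged at the moment of growth. The correlation measure and the halving of the outgoing weights are illustrative assumptions about how such a copy can be made loss-free, not the exact IAGP rule.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def copy_correlated_node(W1, W2, X):
    """Duplicate the hidden unit involved in the strongest correlation."""
    H = sigmoid(X @ W1)
    C = np.corrcoef(H, rowvar=False)          # unit-by-unit correlation
    np.fill_diagonal(C, 0.0)                  # ignore self-correlation
    i = np.unravel_index(np.argmax(np.abs(C)), C.shape)[0]
    W1 = np.hstack([W1, W1[:, [i]]])          # duplicate incoming weights
    W2[i] *= 0.5                              # split the outgoing weights
    W2 = np.vstack([W2, W2[[i]]])             # so the output is unchanged
    return W1, W2

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
W1 = rng.normal(size=(3, 4))                  # 4 hidden units
W2 = rng.normal(size=(4, 1))

out_before = sigmoid(X @ W1) @ W2
W1g, W2g = copy_correlated_node(W1, W2.copy(), X)
out_after = sigmoid(X @ W1g) @ W2g
print(W1g.shape[1], np.allclose(out_before, out_after))   # 5 True
```

Because the copy leaves the network function intact, subsequent training can differentiate the twin units without any sudden jump in error, which is the practical benefit of copying a high-correlation node rather than inserting a randomly initialized one.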

Finally, IAGP is applied to nonlinear function approximation and to the prediction of traffic capacity, and simulation results show the effectiveness of the improved algorithm.

#### 2. IAGP

##### 2.1. AGP

The AGP algorithm adjusts the structure of the network self-adaptively. First, it creates an initial feedforward neural network and trains it with the BP algorithm until the target error is reached. If the target is not reached, it calculates the sigmoidal activation value of each node, prunes all insignificant neurons, and merges similar neurons to simplify the network. If, after a further amount of training, the target accuracy is still not reached, a node is added based on the idea of cell division, which ensures both that the grown node is the best candidate and that the two resulting nodes remain correlated. The network is then retrained. If the classification accuracy of the network falls below an acceptable level, training stops; otherwise, it continues [9].
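The control flow described above can be sketched schematically as follows, with stand-in train, prune, and grow steps so that the loop itself is runnable. The thresholds, the round limit, and the stand-in error model (error halving per training step) are assumptions for illustration only; in the real algorithm these steps are BP training and the structural operations of [9].

```python
def agp(train_step, prune_step, grow_step, error, target, max_rounds=20):
    """Alternate training with pruning, then growing, until the error
    reaches `target` or `max_rounds` structural changes occur."""
    for _ in range(max_rounds):
        train_step()                    # BP training phase
        if error() <= target:
            return True                 # target accuracy reached
        prune_step()                    # remove/merge insignificant nodes
        train_step()
        if error() <= target:
            return True
        grow_step()                     # add a node (cell-division idea)
    return False

# Stand-in "network": the error halves with each training step.
state = {"err": 1.0, "hidden": 4}
ok = agp(train_step=lambda: state.update(err=state["err"] * 0.5),
         prune_step=lambda: state.update(hidden=max(1, state["hidden"] - 1)),
         grow_step=lambda: state.update(hidden=state["hidden"] + 2),
         error=lambda: state["err"],
         target=1e-2)
print(ok, round(state["err"], 4))       # True 0.0078
```

The loop makes explicit the alternation that AGP relies on: structural changes (prune, grow) are always followed by retraining before the stopping condition is checked again.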

##### 2.2. IAGP

To improve network performance and efficiency, this paper presents IAGP. First, the algorithm creates an initial network based on the actual problem. Here we assume that the initial network is a fully connected multilayer feedforward neural network, as shown in Figure 1.