Mathematical Problems in Engineering

Volume 2015, Article ID 623720, 18 pages

http://dx.doi.org/10.1155/2015/623720

## Non-Gaussian Hybrid Transfer Functions: Memorizing Mine Survivability Calculations

^{1}Institute of Systems Engineering, Faculty of Science, Jiangsu University, 301 Xuefu, Zhenjiang 212013, China

^{2}Department of Computer Science, Faculty of Applied Science, Kumasi Polytechnic, P.O. Box 854, Kumasi, Ghana

^{3}Computer Science and Technology, Suqian College, Jiangsu University, 399 South Huanghe, 223800, China

^{4}Department of Mathematics and Statistics, School of Applied Science, Kumasi Polytechnic, P.O. Box 854, Kumasi, Ghana

^{5}College of Finance and Economics, Jiangsu University, 301 Xuefu, Zhenjiang 212013, China

Received 14 July 2014; Revised 7 November 2014; Accepted 8 November 2014

Academic Editor: Valder Steffen Jr.

Copyright © 2015 Mary Opokua Ansong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Hybrid algorithms and models have received significant interest in recent years and are increasingly used to solve real-world problems. Departing from existing methods of radial basis transfer function construction, this study proposes a novel nonlinear-weight hybrid algorithm involving non-Gaussian radial basis transfer functions. The speed and simplicity of the non-Gaussian type are combined with the accuracy of the radial basis function to produce a fast and accurate on-the-fly model for the survivability of emergency mine rescue operations; that is, survivability under all conditions is precalculated and used to train the neural network. The proposed hybrid uses a genetic algorithm as the learning method, performing parameter optimization within an integrated analytic framework to improve network efficiency. Finally, the network parameters, including mean iteration, standard variation, standard deviation, convergent time, and optimized error, are evaluated using the mean squared error. The results demonstrate that the hybrid model reduces computational complexity, increases robustness, and optimizes its parameters. This novel hybrid model shows outstanding performance and is competitive with other existing models.

#### 1. Introduction

Hybrid algorithms are used to optimize real-world implementations: the theoretically best optimization solution often faces challenges in implementation cost, time, and so forth, which call for combining it with another technique. Hybrid algorithms have received significant interest in recent years and are increasingly used to solve real-world problems. These hybrid algorithms or models combine two or more algorithms involving genetic algorithms (GA) [1], particle swarm optimization (PSO) [2], and other computational techniques such as artificial intelligence or neural networks, including but not limited to multilayer perceptrons (MLP), also called sigmoid networks [3], radial basis functions (RBF) [4, 5], fuzzy systems [6], and simulated annealing [7].

Artificial neural networks (ANNs) are artificial intelligence (AI) techniques that can learn from experience; they are robust [8] and improve performance by adapting to changes in the environment. The underlying advantages of ANNs are the ability to operate efficiently on large amounts of data and to generalize the outcome. ANNs are largely used in applications involving classification or function approximation, and it has been proved that several classes of ANN are universal function approximators [9]. These include radial basis function (RBF) and multilayer perceptron (MLP) neural networks. Given the great potential of these techniques, this paper aims to establish a hybrid model using a multilayer perceptron (MLP) network, also called a sigmoid basis function (SBF), and a radial basis function (RBF) network, both with feed-forward learning. The RBF and MLP networks are usually employed in the same kinds of applications, such as nonlinear mapping approximation and pattern recognition [10]; however, their internal calculation structures differ. In multilayer fully connected feed-forward networks, activation flows through the nodal transfer functions from the input layer through a hidden layer to the output layer [10]. For a typical processing node $j$, this can be expressed as $y_j = f\left(\sum_i w_{ij} x_i + b_j\right)$ [11], where $x_i$ is one of the inputs to node $j$, $w_{ij}$ is the connection weight between node $i$ and node $j$, $b_j$ is the bias for node $j$, $f$ is the nodal transfer function, and $y_j$ is the output from node $j$. Each neuron in one layer is connected in the forward direction to every nodal unit in the next layer. One disadvantage of most feed-forward layered neural networks is the high degree of nonlinearity in the parameters: learning must be based on nonlinear optimization techniques (e.g., back-propagation), and the parameter estimate may become trapped at a local minimum of the selected optimization criterion during the learning procedure.
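
As a minimal sketch of the nodal computation just described (an illustration, not the authors' implementation), using the common logistic sigmoid as the transfer function $f$:

```python
import math

def mlp_node(inputs, weights, bias):
    """One feed-forward node: y_j = f(sum_i w_ij * x_i + b_j),
    with a logistic sigmoid as the transfer function f."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Single hidden node with two inputs: net = 0.5*0.8 + (-1.0)*0.2 + 0.1 = 0.3
y = mlp_node([0.5, -1.0], [0.8, 0.2], 0.1)
```

In a full network, each layer's outputs become the next layer's inputs, which is the fully connected forward flow the paragraph above describes.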

Another option for such neural networks is to use the radial basis function (RBF) as the transfer function. There is a strong connection between RBFs and neural networks, and it is reasonable to believe that a radial basis network (RBN) can offer approximation capabilities similar to other feed-forward layered neural networks [12], provided that the hidden layer of the RBN is fixed appropriately. This belief is strongly supported by theoretical results on the RBF method as a multidimensional interpolation technique [13]. A radial basis function neural network has input, hidden, and output layers. The input layer is composed of an input vector $x$. The hidden layer consists of RBF activation functions as network neurons. The net input to an RBF activation function is the vector distance between its weight vector $w$ and the input vector $x$, multiplied by the bias $b$, that is, $\|w - x\|\,b$. Detailed work has been done on the advantages of both sigmoid and radial basis functions [14]. Radial functions are a special class of functions whose value increases or decreases monotonically with the distance from a central point. There are different types of radial basis functions, but the most frequently used is the Gaussian function. It is well known that MLP networks have been applied successfully to several difficult problems. MLP networks also work globally, and the network outputs are decided by all the neurons [15]. Radial basis function networks act as local approximation networks, and their outputs are determined by specified hidden units in certain local receptive fields. RBF networks are simpler than MLP networks and, from the point of view of generalization, respond well to patterns that were not used for training [15]. Compared with other neural networks and fuzzy inference systems, the RBF has the advantages of easy design, stable and good generalization ability, good tolerance to input noise, and online learning ability.
RBF networks are strongly recommended as an efficient and reliable way of designing dynamic systems [15].
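
The Gaussian RBF neuron described above can be sketched as follows; the distance-times-bias net input follows the description in the text, and the rest is an illustrative assumption rather than the authors' exact formulation:

```python
import math

def gaussian_rbf_node(x, center, bias):
    """Net input = Euclidean distance between the weight (center) vector
    and the input vector, multiplied by the bias; the Gaussian transfer
    function then maps it into (0, 1]."""
    dist = math.sqrt(sum((c - xi) ** 2 for c, xi in zip(center, x)))
    net = dist * bias
    return math.exp(-net ** 2)

# The response is locally receptive: it peaks at 1.0 when the input
# coincides with the center and decays as the input moves away.
peak = gaussian_rbf_node([1.0, 2.0], [1.0, 2.0], 0.5)
away = gaussian_rbf_node([4.0, 6.0], [1.0, 2.0], 0.5)
```

The contrast with the sigmoid node is visible here: the sigmoid responds to a projection (a global half-space), while the RBF responds to distance from a center (a local receptive field).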

An important issue in RBF neural network applications is network learning, that is, the need to optimize the adjustable parameters (the center vectors, the variances or widths of the basis functions, and the linear output weights connecting the RBF hidden nodes to the output nodes) and to determine the network structure, that is, the number of RBF nodes [16]. Closely coupled are the determination of the network size and the adjustment of parameters over the continuous parameter space. To this end, evolutionary algorithms have been used, but they are computationally very expensive to implement [16], which results in slow and premature convergence; this has attracted attention in the literature. Center location and clustering techniques have been proposed [17]. An identical width can be set for all the basis functions if the input samples are uniformly distributed; otherwise a particular width has to be set for each individual basis function to reflect the input distribution [18]. Once the centers and the widths are determined, the linear output weights can be determined using Cholesky factorization, orthogonal least squares, or singular value decomposition [19]. In contrast to the conventional two-stage learning procedure, supervised learning methods aim to optimize all the network parameters [20]. Various techniques have been introduced to improve network convergence, including hybrid algorithms that combine a gradient-based search for the nonlinear parameters (the widths and centers) of the RBF nodes with least squares estimation of the linear output weights [18], and methods combining the merits of fuzzy and crisp clustering [21].
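
The second stage of the two-stage procedure — fix the centers and widths, then solve for the linear output weights — can be sketched with SVD-based least squares, one of the cited options. The Gaussian design matrix and the single shared width here are illustrative assumptions:

```python
import numpy as np

def rbf_output_weights(X, y, centers, width):
    """With centers and width fixed, the model is linear in the output
    weights, so they can be estimated in one shot via least squares
    (np.linalg.lstsq is SVD-based)."""
    # Design matrix: Phi[n, j] = exp(-||x_n - c_j||^2 / (2 * width^2))
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-(d ** 2) / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

Because this stage is linear, it is fast and has no local minima; the hard, nonlinear part of RBF learning is choosing the centers and widths, which is exactly what the evolutionary and clustering techniques above target.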

Supervised learning is thought to be superior to conventional two-stage approaches, but it can be more demanding computationally. The Akaike information criterion has been used when dealing with different network sizes; however, it is equally computationally demanding [21]. With respect to the determination of the RBF neural network structure, a popular approach is to formulate it as a linear-in-the-parameters problem, where all the training patterns/samples are usually used as candidate RBF centers. To improve network generalization, the regularized forward selection algorithm has been proposed [22], which combines subset selection with zero-order regularization. Backward selection methods have also been used in RBF center selection [23]. Forward selection algorithms are thought to be superior to backward methods in terms of computational efficiency; even so, these methods have several major disadvantages, such as being computationally too expensive or sometimes impossible to implement. The search for the optimal values of the nonlinear parameters (RBF centers and widths) is a continuous optimization problem. To optimize the RBF center and width parameters along with the network structure determination process, a sparse incremental regression (SIR) modeling method was proposed very recently to determine the network structure and the associated nonlinear parameters simultaneously [24]. This method can deal with large datasets and improve the network significantly. Other approaches include moving k-means clustering to position the RBF centers with Givens least squares to estimate the weights [25] and a forward algorithm for RBF construction [9], to mention a few.

Different from existing methods of RBF neural network and multilayer perceptron construction, this paper proposes a novel hybrid (HSRF) feed-forward algorithm involving the multilayer perceptron (sigmoid) and non-Gaussian radial basis transfer functions, which is robust and performs parameter optimization within an integrated analytic framework, leading to two main technical advantages:

(1) the network can be significantly improved through the optimization of the nonlinear RBF parameters over the continuous parameter space;

(2) the speed of the multilayer perceptron and the simplicity and accuracy of the RBF are combined to produce a fast and accurate model for rescue operations.

In addition, the paper uses a coded genetic algorithm to train the proposed hybrid algorithm. Finally, network outcomes including mean iteration, standard variation, standard deviation, convergent time, and optimized error are evaluated using a 5th-order polynomial.
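
As a hedged sketch of how a genetic algorithm can tune network parameters by minimizing a fitness function such as the network's mean squared error (the operators, rates, and population sizes here are illustrative assumptions, not the authors' coded GA):

```python
import random

def genetic_optimize(fitness, dim, pop_size=30, generations=100,
                     bounds=(-1.0, 1.0), mut_rate=0.1):
    """Toy real-coded GA: elitist truncation selection, uniform
    crossover, and Gaussian mutation, minimizing `fitness`."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half of the population (minimization).
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            # Uniform crossover: each gene drawn from either parent.
            child = [x if random.random() < 0.5 else y
                     for x, y in zip(a, b)]
            # Gaussian mutation, clipped back into the bounds.
            child = [min(hi, max(lo, g + random.gauss(0, 0.1)))
                     if random.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Demo: minimize a simple sphere function in place of a network MSE.
best = genetic_optimize(lambda v: sum(g * g for g in v), dim=3)
```

In the setting of this paper, the chromosome would encode the network's adjustable parameters (centers, widths, output weights) and the fitness would be the mean squared error on the training set.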

##### 1.1. Problem Statement and Objective

There are generally heavy casualties, tremendous loss of property, and loss of human life in the mining sector in the event of accidents such as fire, rock fall, flooding, or poisonous gases [26]. This calls for a model that is fast and robust for monitoring and guiding survivors to safety in the event of an accident. The justification for this work is that the focus of current research is moving from system analysis of small-world networks to networks of millions of nodes. This will demand large computers to process, and even if those computers are available, it will demand considerable time to run. This implies the need for a fast prediction algorithm using a neural network to memorize precalculated results in order to deal with large numbers of sensors (i.e., as sensors grow so rapidly into the thousands and millions, battery drain will not permit calculations on the spot of a problem). In addition, the base station can be destroyed in the event of an accident.

Further justification for research like this is that simple imitations of the human brain (called neural network models) demonstrate fast and accurate learning and classification in problems that otherwise require human experts. Although such tools obviously cannot replace human experts, they are used as on-the-fly diagnostic tools and supporting evidence in quick decision making. With these in mind, the main objective of this study is to investigate and improve upon the Gaussian radial basis function and to develop a non-Gaussian hybrid of the MLP (sigmoid or SBF) and compact radial basis functions (CRBF) with enhanced optimization features. From this, an optimized hybrid model is assessed that has the highest predicted survival probability for an emergency rescue operation in underground mining, with a genetic algorithm. The two main objectives examined in this paper are as follows:

(i) to investigate the Gaussian radial basis function model and remove the additional computational burden on the model by replacing the power (square) operation of the Gaussian model, generating compact (non-Gaussian) radial basis functions, literal but novel, to reduce computational cost and increase processing efficiency; the study focuses on the use of an absolute operation instead of a square operation. In Figure 1 the green outline (for online viewing) represents the additional requirement of resources in terms of time, cost, and so forth, assuming the resource is proportional to the calculated value, that is, zeros are not stored;

(ii) to develop an optimized hybrid neural network, called the HSRF model, with a nonlinear weight of negative cosine imposed on the new compact radial basis function. This nonlinear weight was introduced to further reduce the RBF magnitude in the model, for accuracy at maintained speed.
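
The two modifications can be sketched as follows. The absolute-value operation replacing the square in the Gaussian exponent follows objective (i); how the negative-cosine weight enters in objective (ii) is not specified here, so the multiplicative form below is purely an assumption for illustration:

```python
import math

def gaussian_rbf(dist):
    """Standard Gaussian basis: the exponent carries a power (square)
    operation, the computational burden targeted by objective (i)."""
    return math.exp(-dist ** 2)

def compact_rbf(dist):
    """Compact (non-Gaussian) basis: an absolute-value operation
    replaces the square, trimming the power-operation cost."""
    return math.exp(-abs(dist))

def hsrf_weighted(dist):
    """Hypothetical reading of objective (ii): a negative-cosine
    nonlinear weight imposed multiplicatively on the compact basis."""
    return -math.cos(dist) * compact_rbf(dist)
```

Both bases peak at the center (distance zero) and decay with distance; the compact form simply swaps the squared distance for the absolute distance in the exponent.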