Mathematical Problems in Engineering

Volume 2016, Article ID 8153282, 9 pages

http://dx.doi.org/10.1155/2016/8153282

## Analytical Redundancy Design for Aeroengine Sensor Fault Diagnostics Based on SROS-ELM

Jiangsu Province Key Laboratory of Aerospace Power System, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, China

Received 27 December 2015; Accepted 3 April 2016

Academic Editor: Wen Chen

Copyright © 2016 Jun Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The analytical redundancy technique is of great importance for guaranteeing the reliability and safety of aircraft engine systems. In this paper, a machine learning based aeroengine sensor analytical redundancy technique is developed and verified through hardware-in-the-loop (HIL) simulation. A modified online sequential extreme learning machine, the selective updating regularized online sequential extreme learning machine (SROS-ELM), is employed to train the model online and estimate sensor measurements. It selectively updates the output weights of the neural network according to the prediction accuracy and the norm of the output weight vector, tackles the problems of singularity and ill-posedness by regularization, and adopts a dual activation function in the hidden nodes, combining neural and wavelet theory, to enhance prediction capability. The experimental results verify the good generalization performance of SROS-ELM and show that the developed analytical redundancy technique for aeroengine sensor fault diagnosis based on SROS-ELM is effective and feasible.

#### 1. Introduction

As the control and health management of an aircraft engine rely heavily on precise and reliable sensor measurements, sensors are among the most important components of the aeroengine system. With the increase of control and monitoring variables, the types and number of sensors used in aeroengines are growing [1]. However, most sensors work in a severe and rapidly changing environment of high temperature, high pressure, and strong vibration [2]; thus they are very vulnerable to failure. Therefore, measures should be adopted to ensure the correctness of sensor readings. For this challenge, the sensor redundancy technique is a good solution. Generally there are two kinds of sensor redundancy: hardware redundancy and analytical redundancy [3]. Hardware redundancy adopts more than one sensor to measure the same engine variable, but at the price of extra cost, maintenance, additional weight, and more space to accommodate the sensors. Analytical redundancy constructs redundant estimates of the sensor readings using numerical algorithms, reducing the weight and cost of the aeroengine. Owing to these remarkable advantages, the analytical redundancy technique has been widely researched and applied in aeroengine systems since it originated. Wallhagen and Arpasi demonstrated the application of the sensor analytical redundancy technique for enhancing the reliability of engine control systems [4]. Corley et al. developed a fault indication and correction (FICA) system and applied it to the engine control systems of the T700, JTDE, and F404, which laid the theoretical basis and set an excellent example for the application of analytical redundancy in engine control systems [5, 6].

Analytical redundancy design methods fall into three categories: model-based, data-driven, and hybrid approaches. Model-based techniques can diagnose new sensor faults with no prior knowledge or experience, but they depend on an accurate on-board adaptive engine model whose reliability may decline as modeling uncertainties and nonlinear complexities increase [7]. On the other hand, the data-driven method needs no knowledge of the intricate engine working principles or complicated modeling skills and thus attracts much interest. Botros et al. presented an application of optimized radial basis function neural network based data mining to sensor fault detection on gas-turbine-driven compressor stations [8]. Huang employed autoassociative neural networks to detect sensor failures in the absence of models and to reconstruct the engine control system [9]. Joly et al. developed a gas-turbine diagnostics structure using several artificial neural networks for a high bypass ratio military turbofan engine [10]. Ogaji et al. also conceived artificial neural network based sensor fault diagnosis for gas turbines; the system was trained to detect, isolate, and assess faults [11]. However, most existing methods are trained offline and therefore cannot capture and adapt to dynamic changes of system characteristics. Besides, the conventional machine learning algorithms that the analytical redundancy technique is based on have weaknesses such as slow sample learning and massive computing resource requirements.

Extreme learning machine (ELM) is a novel and efficient learning algorithm for training single-hidden layer feed-forward neural networks (SLFNs) proposed by Huang et al. [12]. It has been proven that the ELM method has not only classification capability but also universal approximation capability [13, 14]. In addition, as verified in [14], ELM can learn much faster than traditional SVM, LS-SVM, and neural networks while achieving similar or much better generalization performance. Nevertheless, ELM is an offline learning algorithm. In order to learn data one-by-one or chunk-by-chunk online, Liang et al. proposed a fast and accurate online sequential extreme learning machine (OS-ELM) based on the idea of ELM [15]. Different from gradient-based neural networks, which adjust learning parameters iteratively, it randomly generates input weights and hidden biases and determines the output weights analytically according to the sequentially arriving data. However, there are still limitations, such as singularity and ill-posedness problems and potentially inconsistent and unstable performance. To alleviate these weaknesses, this paper proposes a modified online sequential extreme learning machine, the selective updating regularized online sequential extreme learning machine (SROS-ELM). An aeroengine analytical redundancy technique based on it is then developed and verified through HIL simulation.

The rest of this paper is organized as follows. Section 2 gives a brief review of OS-ELM. Section 3 presents SROS-ELM algorithm and evaluates its performance. Section 4 describes the analytical redundancy technique based on SROS-ELM in detail. The verification of the developed technique through hardware-in-the-loop (HIL) simulation is shown in Section 5. Conclusions are drawn in Section 6.

#### 2. Brief Review of OS-ELM

For $N$ arbitrary distinct samples $(\mathbf{x}_j, \mathbf{t}_j)$, where $\mathbf{x}_j \in \mathbb{R}^n$ is the input vector and $\mathbf{t}_j \in \mathbb{R}^m$ is the target vector, the output function of an SLFN with $L$ hidden neurons and activation function $g(\cdot)$ can be represented as

$$\sum_{i=1}^{L} \boldsymbol{\beta}_i\, g\left(\mathbf{w}_i \cdot \mathbf{x}_j + b_i\right) = \mathbf{t}_j, \quad j = 1, \ldots, N, \tag{1}$$

where $\mathbf{w}_i$ is the weight vector connecting the input layer and the $i$th hidden neuron, $\boldsymbol{\beta}_i$ is the weight vector connecting the $i$th hidden neuron and the output layer, and $b_i$ is the bias of the $i$th hidden neuron.

These $N$ equations can be written in matrix form as

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{2}$$

where $\mathbf{H}$, with entries $H_{ji} = g\left(\mathbf{w}_i \cdot \mathbf{x}_j + b_i\right)$, is named the hidden layer output matrix [12]. The hidden node parameters $\mathbf{w}_i$ and $b_i$ are simply assigned random values and need not be tuned. Thus, the output weight matrix $\boldsymbol{\beta}$ is the only parameter that needs to be calculated. The determination of the output weights can be simplified as seeking the least-squares solution of the given linear system,

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \left\| \mathbf{H}\boldsymbol{\beta} - \mathbf{T} \right\|, \tag{3}$$

which may be expressed as

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}, \tag{4}$$

where $\mathbf{H}^{\dagger}$ is the Moore-Penrose generalized inverse of matrix $\mathbf{H}$, which can be calculated through orthogonal projection as $\mathbf{H}^{\dagger} = \left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}$ when $\mathbf{H}^{T}\mathbf{H}$ is nonsingular. Substituting into (4), $\hat{\boldsymbol{\beta}}$ becomes

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\mathbf{T}. \tag{5}$$
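As an illustration, the batch ELM training step above can be sketched in a few lines of NumPy; the toy data, dimensions, and sigmoid activation below are illustrative assumptions rather than settings taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, L = 200, 3, 20                      # samples, inputs, hidden nodes
X = rng.uniform(-1.0, 1.0, (N, n))
T = np.sin(X.sum(axis=1, keepdims=True))  # toy target (illustrative)

W = rng.normal(size=(n, L))               # random input weights w_i (never tuned)
b = rng.normal(size=(1, L))               # random hidden biases b_i

# Hidden layer output matrix H (N x L), sigmoid activation.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Output weights: least-squares solution via the Moore-Penrose inverse.
beta = np.linalg.pinv(H) @ T

rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
```

When `H.T @ H` is nonsingular, the pseudoinverse route coincides with solving the normal equations directly, which is the form the online derivation builds on.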

Equation (5) can be solved recursively by [15]

$$\boldsymbol{\beta}^{(k+1)} = \boldsymbol{\beta}^{(k)} + \mathbf{P}_{k+1}\mathbf{H}_{k+1}^{T}\left(\mathbf{T}_{k+1} - \mathbf{H}_{k+1}\boldsymbol{\beta}^{(k)}\right), \tag{6}$$

where

$$\mathbf{P}_{k+1} = \mathbf{P}_{k} - \mathbf{P}_{k}\mathbf{H}_{k+1}^{T}\left(\mathbf{I} + \mathbf{H}_{k+1}\mathbf{P}_{k}\mathbf{H}_{k+1}^{T}\right)^{-1}\mathbf{H}_{k+1}\mathbf{P}_{k}, \tag{7}$$

and the initialization procedure can be completed by $\mathbf{P}_{0} = \left(\mathbf{H}_{0}^{T}\mathbf{H}_{0}\right)^{-1}$ and $\boldsymbol{\beta}^{(0)} = \mathbf{P}_{0}\mathbf{H}_{0}^{T}\mathbf{T}_{0}$.

The OS-ELM algorithm is suitable for SLFNs with additive or RBF hidden nodes. It can handle sequential learning tasks much faster than other sequential algorithms while maintaining good prediction accuracy. However, the analytical determination of the output weights according to formula (5) rests on the hypothesis that $\mathbf{H}^{T}\mathbf{H}$ is nonsingular, which is not always satisfied. Besides, the ill-posedness and singularity problems have received little attention, although they may greatly harm the generalization performance. Furthermore, the output weights are always updated, without considering that some newly arriving samples may degrade the generalization performance. Solutions to these problems are investigated in the next section.

#### 3. SROS-ELM

In this section, the SROS-ELM is proposed on the basis of OS-ELM and its performance is evaluated on some benchmark data sets.

##### 3.1. Formula Derivation

On the basis of ridge regression theory [18], stability can be improved and better generalization performance achieved by adding a positive value $\lambda$, called the regularization factor, to the diagonal elements of $\mathbf{H}^{T}\mathbf{H}$ when determining the output weight vector [19]. Thus, formula (5) becomes

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{H}^{T}\mathbf{H} + \lambda\mathbf{I}\right)^{-1}\mathbf{H}^{T}\mathbf{T}. \tag{8}$$

In order to train the SLFNs online, we now derive the recursive formula for updating the output weights.
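A minimal numerical illustration of why the regularization factor matters: when there are more hidden nodes than samples, the Gram matrix is singular and the plain normal equations of (5) break down, while the regularized system always has a unique solution. The dimensions below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
N, L, lam = 20, 40, 0.1                  # fewer samples than hidden nodes
H = rng.normal(size=(N, L))              # stand-in hidden layer output matrix
T = rng.normal(size=(N, 1))              # stand-in targets

K = H.T @ H                              # L x L Gram matrix, rank <= N < L
# K is singular here, so the unregularized normal equations fail, but
# adding lam to the diagonal makes the system well-posed and solvable.
beta = np.linalg.solve(K + lam * np.eye(L), H.T @ T)
```

The same diagonal shift also improves conditioning when the Gram matrix is merely near-singular, which is the ill-posedness case discussed above.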

For an initial training subset $\aleph_{0} = \left\{\left(\mathbf{x}_{i}, \mathbf{t}_{i}\right)\right\}_{i=1}^{N_{0}}$, the initial output weight vector can be estimated by

$$\boldsymbol{\beta}^{(0)} = \mathbf{K}_{0}^{-1}\mathbf{H}_{0}^{T}\mathbf{T}_{0}, \tag{9}$$

where $\mathbf{K}_{0} = \mathbf{H}_{0}^{T}\mathbf{H}_{0} + \lambda\mathbf{I}$, $\mathbf{H}_{0} = \left[\mathbf{h}\left(\mathbf{x}_{1}\right), \ldots, \mathbf{h}\left(\mathbf{x}_{N_{0}}\right)\right]^{T}$, and $\mathbf{T}_{0} = \left[\mathbf{t}_{1}, \ldots, \mathbf{t}_{N_{0}}\right]^{T}$.

Supposing that a new sample $\left(\mathbf{x}_{N_{0}+1}, \mathbf{t}_{N_{0}+1}\right)$ arrives, the output weights can be determined by

$$\boldsymbol{\beta}^{(1)} = \mathbf{K}_{1}^{-1}\left(\mathbf{H}_{0}^{T}\mathbf{T}_{0} + \mathbf{h}_{1}^{T}\mathbf{t}_{N_{0}+1}^{T}\right), \tag{10}$$

where $\mathbf{K}_{1} = \mathbf{K}_{0} + \mathbf{h}_{1}^{T}\mathbf{h}_{1}$ and $\mathbf{h}_{1} = \mathbf{h}\left(\mathbf{x}_{N_{0}+1}\right)$.

Let $\mathbf{H}_{0}^{T}\mathbf{T}_{0} = \mathbf{K}_{0}\boldsymbol{\beta}^{(0)} = \left(\mathbf{K}_{1} - \mathbf{h}_{1}^{T}\mathbf{h}_{1}\right)\boldsymbol{\beta}^{(0)}$; substituting this into (10), the output weights become

$$\boldsymbol{\beta}^{(1)} = \boldsymbol{\beta}^{(0)} + \mathbf{K}_{1}^{-1}\mathbf{h}_{1}^{T}\left(\mathbf{t}_{N_{0}+1}^{T} - \mathbf{h}_{1}\boldsymbol{\beta}^{(0)}\right). \tag{11}$$

In general, for the $(k+1)$th observation, the output weights can be updated by

$$\boldsymbol{\beta}^{(k+1)} = \boldsymbol{\beta}^{(k)} + \mathbf{K}_{k+1}^{-1}\mathbf{H}_{k+1}^{T}\left(\mathbf{T}_{k+1} - \mathbf{H}_{k+1}\boldsymbol{\beta}^{(k)}\right), \tag{12}$$

$$\mathbf{K}_{k+1} = \mathbf{K}_{k} + \mathbf{H}_{k+1}^{T}\mathbf{H}_{k+1}. \tag{13}$$

$\mathbf{K}_{k+1}$ is a matrix of size $L \times L$, where $L$ denotes the number of hidden nodes in the neural network. As the number of hidden nodes is usually quite large and the computation of the inverse matrix is resource-consuming, the update formula for $\mathbf{K}_{k+1}^{-1}$ can be expressed using the Woodbury formula [20] so as to save computing cost:

$$\mathbf{K}_{k+1}^{-1} = \mathbf{K}_{k}^{-1} - \mathbf{K}_{k}^{-1}\mathbf{H}_{k+1}^{T}\left(\mathbf{I} + \mathbf{H}_{k+1}\mathbf{K}_{k}^{-1}\mathbf{H}_{k+1}^{T}\right)^{-1}\mathbf{H}_{k+1}\mathbf{K}_{k}^{-1}. \tag{14}$$

When the new samples arrive one-by-one rather than block-by-block, formula (14) can be written in a simpler format on the basis of the Sherman-Morrison formula [20]:

$$\mathbf{K}_{k+1}^{-1} = \mathbf{K}_{k}^{-1} - \frac{\mathbf{K}_{k}^{-1}\mathbf{h}_{k+1}^{T}\mathbf{h}_{k+1}\mathbf{K}_{k}^{-1}}{1 + \mathbf{h}_{k+1}\mathbf{K}_{k}^{-1}\mathbf{h}_{k+1}^{T}}. \tag{15}$$

Let $\mathbf{P}_{k} = \mathbf{K}_{k}^{-1}$; then the equation for updating $\mathbf{P}_{k+1}$ becomes

$$\mathbf{P}_{k+1} = \mathbf{P}_{k} - \frac{\mathbf{P}_{k}\mathbf{h}_{k+1}^{T}\mathbf{h}_{k+1}\mathbf{P}_{k}}{1 + \mathbf{h}_{k+1}\mathbf{P}_{k}\mathbf{h}_{k+1}^{T}}. \tag{16}$$

Further, letting $\mathbf{e}_{k+1} = \mathbf{t}_{k+1}^{T} - \mathbf{h}_{k+1}\boldsymbol{\beta}^{(k)}$, a simple recursive formula for the output weights can be achieved:

$$\boldsymbol{\beta}^{(k+1)} = \boldsymbol{\beta}^{(k)} + \mathbf{P}_{k+1}\mathbf{h}_{k+1}^{T}\mathbf{e}_{k+1}. \tag{17}$$
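The Sherman-Morrison rank-1 update above can be verified numerically against a direct matrix inversion; the dimensions and regularization value below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
L, lam = 8, 0.01                         # hidden nodes, regularization factor
Hk = rng.normal(size=(30, L))            # stand-in accumulated hidden outputs
K = Hk.T @ Hk + lam * np.eye(L)          # regularized Gram matrix K_k
P = np.linalg.inv(K)                     # P_k = K_k^{-1}

h = rng.normal(size=(1, L))              # new sample's hidden output row

# Direct route: invert the updated matrix from scratch, O(L^3).
P_direct = np.linalg.inv(K + h.T @ h)

# Sherman-Morrison rank-1 route: only matrix-vector products, no inversion.
P_sm = P - (P @ h.T @ h @ P) / (1.0 + h @ P @ h.T)
```

The rank-1 route costs $O(L^2)$ per sample instead of $O(L^3)$, which is the saving the text refers to.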

As investigated by Zhu et al. [21] and Bartlett [22], the prediction accuracy of neural networks is determined not only by the training error but also by the norm of the output weights: networks with weights of smaller norm tend to generalize better. Taking account of these two factors, the output weights are updated selectively according to both the prediction error and the norm of the output weights:

$$\boldsymbol{\beta}^{(k+1)} = \begin{cases} \boldsymbol{\beta}^{(k)} + \mathbf{P}_{k+1}\mathbf{h}_{k+1}^{T}\mathbf{e}_{k+1}, & \left\|\mathbf{e}_{k+1}\right\| > \varepsilon, \\ \boldsymbol{\beta}^{(k)}, & \text{otherwise}, \end{cases} \tag{18}$$

where $\mathbf{e}_{k+1}$ denotes the prediction error of $\mathbf{x}_{k+1}$ and $\varepsilon$ is the threshold for decision.

According to the theory proposed in [23], the activation functions of the hidden nodes have great impact on the performance of neural networks. As verified in [24], neural networks taking the inverse hyperbolic sine and the Morlet wavelet as a dual activation function handle nonlinearity and dynamic systems better and achieve promising performance. Following that, this paper adopts a dual activation function in the hidden units, combining neural and wavelet theory, to improve prediction capability:

$$g(z) = \frac{1}{2}\left[\operatorname{asinh}(z) + \cos(1.75z)\,e^{-z^{2}/2}\right]. \tag{19}$$
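A sketch of such a dual activation in NumPy; the equal-weight average of the inverse hyperbolic sine and the Morlet wavelet is an assumption modeled on summation-wavelet ELM variants, not a formula quoted verbatim from the paper:

```python
import numpy as np

def dual_activation(z):
    """Dual hidden-node activation: inverse hyperbolic sine + Morlet wavelet.

    The equal-weight average below is an assumed combination (as in
    summation-wavelet ELM variants). Works elementwise on arrays.
    """
    morlet = np.cos(1.75 * z) * np.exp(-0.5 * z ** 2)
    return 0.5 * (np.arcsinh(z) + morlet)
```

Near the origin the oscillatory wavelet term dominates local detail, while for large inputs the unbounded, slowly growing asinh term takes over, which is what makes the pair attractive for nonlinear dynamic systems.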

In summary, the procedure of SROS-ELM goes as follows:

(1) Initialization: for the initial training subset $\aleph_{0} = \left\{\left(\mathbf{x}_{i}, \mathbf{t}_{i}\right)\right\}_{i=1}^{N_{0}}$:
(a) randomly set the values of the input weights $\mathbf{w}_{i}$ and biases $b_{i}$, and choose a proper $\lambda$;
(b) calculate the hidden layer output matrix $\mathbf{H}_{0}$;
(c) estimate the initial output weight vector $\boldsymbol{\beta}^{(0)} = \mathbf{P}_{0}\mathbf{H}_{0}^{T}\mathbf{T}_{0}$, where $\mathbf{P}_{0} = \left(\mathbf{H}_{0}^{T}\mathbf{H}_{0} + \lambda\mathbf{I}\right)^{-1}$ and $\mathbf{T}_{0} = \left[\mathbf{t}_{1}, \ldots, \mathbf{t}_{N_{0}}\right]^{T}$;
(d) set $k = 0$.
(2) Updating output weights: for each newly arriving training sample $\left(\mathbf{x}_{k+1}, \mathbf{t}_{k+1}\right)$:
(a) calculate the hidden layer output vector $\mathbf{h}_{k+1} = \mathbf{h}\left(\mathbf{x}_{k+1}\right)$;
(b) calculate $\boldsymbol{\beta}^{(k+1)}$ and $\mathbf{P}_{k+1}$ according to (18) and selectively update them;
(c) set $k = k + 1$ and go to step (a).
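The whole procedure can be sketched as follows; the toy data, threshold, and regularization factor are illustrative assumptions, and the selective rule is reduced here to an error-threshold test:

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, lam, eps = 2, 15, 0.01, 1e-3       # inputs, hidden nodes, lambda, threshold
W = rng.normal(size=(n, L))              # random input weights
b = rng.normal(size=(1, L))              # random hidden biases

def hidden(x):
    """Dual-activation hidden layer: mean of asinh and the Morlet wavelet."""
    z = x @ W + b
    return 0.5 * (np.arcsinh(z) + np.cos(1.75 * z) * np.exp(-0.5 * z ** 2))

X = rng.uniform(-1.0, 1.0, (300, n))
T = np.tanh(X[:, :1] + X[:, 1:2])        # toy target (illustrative)

# Step (1): initialization on the first N0 samples with regularization.
N0 = 60
H0 = hidden(X[:N0])
P = np.linalg.inv(H0.T @ H0 + lam * np.eye(L))
beta = P @ H0.T @ T[:N0]

# Step (2): selective sequential updating.
updates = 0
for k in range(N0, len(X)):
    h = hidden(X[k:k + 1])               # 1 x L hidden output row
    e = T[k:k + 1] - h @ beta            # prediction error for the new sample
    if np.linalg.norm(e) > eps:          # selective rule (error test only here)
        P = P - (P @ h.T @ h @ P) / (1.0 + h @ P @ h.T)
        beta = beta + P @ h.T @ e
        updates += 1

rmse = np.sqrt(np.mean((hidden(X) @ beta - T) ** 2))
```

Samples that are already predicted within the threshold leave both the output weights and the covariance-like matrix untouched, which is what protects the trained network from being degraded by redundant arrivals.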

##### 3.2. Evaluation Test

As aeroengine analytical redundancy design using machine learning is a regression problem, the proposed SROS-ELM is evaluated on several real-world regression applications, which were also used in [15], and its performance is compared with some popular algorithms. A sigmoid function is taken as the hidden unit activation function for OS-ELM, while SROS-ELM adopts the dual activation combining the inverse hyperbolic sine function and the Morlet wavelet. Since SROS-ELM has the same computational complexity as OS-ELM, the comparison of training and testing times is not considered. The root mean square error (RMSE), defined as the deviation between the predicted and target values, is taken as the performance criterion. Table 1 lists the average results of fifty trials on different benchmark data sets. From Table 1, it is obvious that the proposed SROS-ELM outperforms OS-ELM in generalization performance with the same number of hidden nodes. Moreover, SROS-ELM consistently achieves smaller RMSE than some other popular sequential learning algorithms; thus it is more suitable for aeroengine analytical redundancy design, which will be illustrated in detail in the next section.