Abstract

In the maintenance of wind power units, the shaft centerline orbit is an important feature for diagnosing the status of the unit. This paper diagnoses faults from the orbit as follows: the characteristics of the orbit are extracted by affine invariant moments and taken as the feature parameters of a neural network to construct the identification model; the Simulated Annealing (SA) algorithm is used to optimize the weight matrix of the Hopfield neural network; and several typical faults are then selected as identification examples. Experimental results show that the SA-Hopfield identification model performs better than previous methods.

1. Introduction

The operating environment and the structure of wind turbines are very complex, and most failures are due to vibrations. These vibrations seriously affect the functioning of the unit and shorten its service life; diagnosing turbine vibration faults is therefore necessary work. Common diagnostic methods extract spectral information from the vibration signal, so the information is limited and incomplete, and sometimes there are nonlinear relationships among the feature data. The shaft centerline orbit, by contrast, is a carrier of the unit's state information: it overcomes the two drawbacks above and can reflect fault information more efficiently and accurately.

Currently, most research on the shaft centerline orbit extracts features and then applies pattern recognition methods for classification. References [1–3] presented feature extraction based on the Hu invariant moments, and [4–6] proposed recognition methods based on the BP network. However, these methods leave two problems unsolved. First, they cannot accurately describe the characteristics of the centerline orbit, because the data collected on site through the acquisition card are discrete. Second, the initial weights and thresholds of the BP neural network are generated randomly, so the BP network easily falls into a local optimum and the precision of the diagnosis is low.

To improve the accuracy of the classification model, we introduce the Hopfield neural network in this paper. Figure 1 shows the simple structure of a neuron, which computes its output from the input data, thresholds, and weights. Because of this nonlinear transmission capacity of neurons, neural networks have been widely used in pattern recognition and parameter fitting in recent years.

An artificial neural network (ANN) is a simulation of the nervous system of the brain and includes many of its features [7–11]. An ANN is capable of learning the key information patterns within a multidimensional information domain and can therefore be used in many applications for tracking and predicting complex energy systems [12–15]. The human brain is a highly complex machine capable of nonlinear and parallel computing; it can organize its neurons and, in certain tasks (such as pattern recognition, perception, and motor control), is faster than today's fastest computers. Because our understanding of the brain's mechanisms of intelligence and the related scientific and technical level are limited, neural networks simulate the brain through reasonable simplification and abstraction. A neural network is thus a massively parallel distributed processing system built from a large number of neural units. It resembles the brain in two respects: (1) the network acquires knowledge by learning from the external environment; (2) the acquired knowledge is stored in the connection strengths between neurons (the synaptic weights).

The Hopfield network [16–19] is a kind of interconnected (feedback) network. It introduces the concept of an energy function, which is similar to a Lyapunov function: the topological structure of the network (represented by the connection matrix) is made to correspond to the optimization problem (described by the objective function), which converts the problem into the evolution of a dynamical system. This evolution is a nonlinear dynamical system that can be described by a set of nonlinear difference equations (discrete case) or differential equations (continuous case). The stability of the system can be analyzed with the energy function: if the required conditions are satisfied, the energy keeps decreasing during the operation of the network and finally reaches equilibrium at a stable state. For a nonlinear dynamical system, the possible outcomes of evolution from an initial state can be summarized as asymptotically stable points, limit cycles, chaos, and divergence.
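For concreteness, the energy function referred to here has, in the standard discrete Hopfield model, the following textbook form (this is the general form, not a formula reproduced from this paper), with neuron states $v_i$, symmetric weights $w_{ij}$, and thresholds $b_i$:

```latex
E = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij} v_i v_j - \sum_{i=1}^{n} b_i v_i
```

With $w_{ij} = w_{ji}$ and $w_{ii} = 0$, each asynchronous state update can only decrease $E$ or leave it unchanged, which is why the network settles at a stable state.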

Because the transfer function of an artificial neural network is bounded, the state of the system cannot diverge. Currently, applications of such networks usually rely on the asymptotically stable points to solve problems. If a stable point of the system is regarded as a memory, the evolution from an initial state towards a stable state is the process of retrieving that memory. If we further regard the stable point as a minimum of an energy function and treat the energy function as an objective function, the retrieval process becomes an optimization problem. Thus, the evolution of a Hopfield neural network is a computation of associative memory or a process of solving an optimization problem. In fact, the problem does not need to be solved explicitly: with properly designed connection weights and inputs, the feedback network reaches the solution by itself.

The paper is organized as follows. In Section 2, we briefly propose the method of feature extraction for the shaft centerline orbit. In Section 3, we introduce the Hopfield neural network and the Simulated Annealing algorithm, and we utilize SA to improve the classification precision of the Hopfield NN. In Section 4, we present the details of our experiment and compare the precision of the Hopfield NN and the SA-Hopfield NN. Conclusions and further studies are discussed in Section 5.

2. Feature Extraction of Shaft Centerline Orbit

Orbit recognition is essentially a two-dimensional pattern recognition problem that relies mainly on extracted feature information to identify patterns. We use discrete Hu invariant moments [4, 20, 21] to extract the features of the shaft orbit.

2.1. Discrete Hu Invariant Moments

Definition 1 (moments and central moments). Because the graphics measured by the acquisition card are discrete, the image of the orbit is assumed to consist of discrete points $(x_i, y_i)$, $i = 1, 2, \ldots, N$. The discrete moments and central moments are then defined as
$$m_{pq} = \sum_{i=1}^{N} x_i^p y_i^q, \qquad \mu_{pq} = \sum_{i=1}^{N} (x_i - \bar{x})^p (y_i - \bar{y})^q,$$
where $p, q = 0, 1, 2, \ldots$ and $(\bar{x}, \bar{y}) = (m_{10}/m_{00},\, m_{01}/m_{00})$ is the centroid of the image. From these formulas we can see that the central moments are translation invariant.

Definition 2 (normalized translation-invariant moments). Normalizing the central moments by $\mu_{00}$ yields moments that remain translation invariant:
$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}, \qquad \gamma = \frac{p+q}{2} + 1.$$

Definition 3 (translation and rotation invariant moments). Although $\eta_{pq}$ satisfies translation invariance, it fails to satisfy rotational invariance. Hu derived a complete set of seven invariant moments from the second- and third-order normalized moments:
$$\begin{aligned}
\varphi_1 &= \eta_{20} + \eta_{02},\\
\varphi_2 &= (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,\\
\varphi_3 &= (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,\\
\varphi_4 &= (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,\\
\varphi_5 &= (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big]\\
&\quad + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big],\\
\varphi_6 &= (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),\\
\varphi_7 &= (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big]\\
&\quad - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big].
\end{aligned}$$

The invariant moment features defined by the above formulas can accurately reflect the basic shape features of the orbit. For example, the first moment $\varphi_1$ measures the degree of divergence of the orbit about its axis: the larger $\varphi_1$ is, the greater the divergence of the trajectory. The smaller the second moment $\varphi_2$ is, the better the symmetry of the figure, so $\varphi_2$ serves as a symmetry metric of the orbit.
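As a concrete illustration, the moments and Hu invariants above can be computed directly from the sampled orbit points. The following sketch (NumPy, with an illustrative ellipse rather than real acquisition-card samples) follows Definitions 1–3:

```python
import numpy as np

def hu_moments(points):
    """Compute Hu's seven invariant moments for a discrete point set.

    `points` is an (N, 2) array of (x, y) samples of the shaft
    centerline orbit.
    """
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    xc, yc = x.mean(), y.mean()          # centroid (m10/m00, m01/m00)
    dx, dy = x - xc, y - yc

    def mu(p, q):                        # discrete central moment
        return np.sum(dx**p * dy**q)

    def eta(p, q):                       # normalized central moment
        return mu(p, q) / mu(0, 0) ** ((p + q) / 2 + 1)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    phi1 = e20 + e02
    phi2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    phi3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    phi4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    phi5 = ((e30 - 3 * e12) * (e30 + e12)
            * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
            + (3 * e21 - e03) * (e21 + e03)
            * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    phi6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
            + 4 * e11 * (e30 + e12) * (e21 + e03))
    phi7 = ((3 * e21 - e03) * (e30 + e12)
            * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
            - (e30 - 3 * e12) * (e21 + e03)
            * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])

# Translation invariance check: an ellipse and a shifted copy
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
shifted = ellipse + np.array([5.0, -2.0])
print(np.allclose(hu_moments(ellipse), hu_moments(shifted)))  # True
```

Shifting the orbit leaves all seven values unchanged, confirming the translation invariance claimed in Definition 1.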

2.2. Improvement of the Hu Moments

In calculating the moments above, we find that the Hu invariant moments are not completely invariant. Take the simplest graph, a circle, as an example: when the radius increases, the moment values also increase, as shown in Table 1.

From Table 1 we can see that, in the discrete model, the Hu invariant moments are not scale invariant. Their scale invariance holds for continuous functions but breaks down under stretching in the discrete case, so they cannot be applied directly here. R. Wong proved that the seven moments remain invariant only when the scaling factor and the graphic rotation angle lie within certain ranges.
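The effect in Table 1 is easy to reproduce: in the discrete case, $\mu_{00}$ equals the number of sample points and does not grow with the size of the figure, so the normalized moments scale with it. A small sketch (assumed uniform sampling, not the paper's data):

```python
import numpy as np

def phi1(points):
    """First Hu moment, phi1 = eta20 + eta02, for a discrete point set."""
    x, y = points[:, 0], points[:, 1]
    dx, dy = x - x.mean(), y - y.mean()
    mu00 = float(len(x))                  # each sample point has weight 1
    eta20 = np.sum(dx**2) / mu00**2       # gamma = (2+0)/2 + 1 = 2
    eta02 = np.sum(dy**2) / mu00**2
    return eta20 + eta02

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
small = np.stack([np.cos(t), np.sin(t)], axis=1)      # radius 1
large = 3 * small                                     # radius 3
print(phi1(small), phi1(large))   # ~0.01 vs ~0.09: not scale invariant
```

Tripling the radius multiplies $\varphi_1$ by nine, because the second-order sums grow with the radius while the number of points stays fixed.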

Substituting a scaling factor into the formulas of $\varphi_1$–$\varphi_7$ shows that each moment is multiplied by a power of that factor. In order to remove this factor, we combine the moments using the following algorithm:

The six invariant moments obtained by the above formulas from the edge features of an image describe the shape of the orbit accurately and preserve the relationship between the shape characteristics and the invariant moments.
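The paper's exact combination formulas are not reproduced above, but one standard construction that achieves the stated goal is to divide each of $\varphi_2$–$\varphi_7$ by the power of $\varphi_1$ carrying the same scaling factor, so that the factor cancels and six scale-free invariants remain. A hedged sketch (the exponents are an assumption of this sketch, derived from how $\eta_{pq}$ scales in the discrete case, not taken from the paper):

```python
import numpy as np

# In the discrete case (N fixed sample points), eta_pq scales as
# k**(p+q) under a size change k, so phi1~k^2, phi2~k^4, phi3,phi4~k^6,
# phi6~k^8, phi5,phi7~k^12.  Dividing by the matching power of phi1
# cancels k.  These exponents are an assumption of this sketch.
def improved_moments(phi):
    p1, p2, p3, p4, p5, p6, p7 = phi
    return np.array([p2 / p1**2, p3 / p1**3, p4 / p1**3,
                     p5 / p1**6, p6 / p1**4, p7 / p1**6])

# hypothetical seven Hu moments and the same shape at twice the size
base = np.array([0.3, 0.02, 1e-3, 5e-4, 1e-7, 2e-5, -3e-8])
powers = np.array([2, 4, 6, 6, 12, 8, 12])
scaled = base * 2.0 ** powers
print(np.allclose(improved_moments(base), improved_moments(scaled)))  # True
```

All six ratios are unchanged by the size change, which is exactly the scale invariance the improvement is meant to restore.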

3. Hopfield Neural Network and Simulated Annealing

3.1. Hopfield NN for Identification

The Hopfield neural network is a feedback neural network proposed in 1982 by J. J. Hopfield, a physicist at the California Institute of Technology. On top of the original feedback neural network, it adds the concept of an energy function. The output vector is fed back through the network and updated according to the weights in successive iterations, so that the energy of the network keeps decreasing until it reaches a stable state, which is output as the optimal solution. The basic structure of the Hopfield network is shown in Figure 2.

The network is divided into an input layer and an output layer; the output layer feeds the output vector back through the weight matrix $W$.

The main processes of the Hopfield network identification model are detailed as follows.

Step 1. Extract the Hu invariant moments of every image.

Step 2. Convert them to the six improved moments.

Step 3. Construct the Hopfield Neural Network.

Step 4. Identify the orbit.
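Steps 3 and 4 can be sketched as a discrete Hopfield network used as an associative memory. The three stored bipolar vectors below are illustrative stand-ins for (coded) class features of the three orbit shapes; the paper's actual feature coding is not reproduced here:

```python
import numpy as np

def train_hopfield(prototypes):
    """Hebbian rule: W = (1/n) * sum_k p_k p_k^T, with zero diagonal."""
    n = prototypes.shape[1]
    W = prototypes.T @ prototypes / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, n_iter=20):
    """Update synchronously until the state stops changing."""
    for _ in range(n_iter):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Three mutually orthogonal +-1 prototypes: rows of a 16x16 Hadamard
# matrix (Sylvester construction), skipping the all-ones row.
H = np.array([[1]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
protos = H[1:4]

W = train_hopfield(protos)
noisy = protos[1].copy()
noisy[[0, 5]] *= -1                   # corrupt two bits of pattern 2
print(np.array_equal(recall(W, noisy), protos[1]))   # True
```

Starting from a corrupted input, the network converges back to the stored prototype, which is the identification step: the recovered stable state tells us which orbit class the input belongs to.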

It is well known that the connection weights of a neural network greatly affect its performance. To improve the recognition precision of the network, in the following section we introduce the simulated annealing algorithm to optimize it.

3.2. Optimization by SA

The simulated annealing (SA) algorithm [22, 23] is a heuristic search algorithm widely used in a variety of combinatorial optimization problems. SA mimics the cooling process of a solid: it begins at a high initial temperature that is gradually decreased, optimizing the result by iteration. During the optimization, a candidate solution is accepted either when its objective value is better than the current value or when it satisfies the Boltzmann probability criterion. The algorithm steps are as follows.

Step 1. Initialize the temperature and the network parameters.

Step 2. Decrease the temperature $T$, generate a new solution, and calculate its evaluation-function value. If the change in the evaluation function $\Delta E$ is less than 0, or the Boltzmann probability $\exp(-\Delta E / T)$ is satisfied, accept and update the new solution.

Step 3. Terminate the algorithm if the temperature drops below the threshold, or if no better solution is found over a large number of iterations.
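The three steps above can be sketched as follows. The objective here is a toy stand-in for the Hopfield training error on the moment features; the function names, cooling schedule, and neighbour rule are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def simulated_annealing(objective, w0, t0=1.0, t_min=1e-4,
                        alpha=0.95, steps_per_t=50, seed=0):
    """Minimize `objective` over a weight vector by simulated annealing."""
    rng = np.random.default_rng(seed)
    w, e = w0, objective(w0)
    best_w, best_e = w, e
    t = t0
    while t > t_min:                     # Step 3: stop below threshold
        for _ in range(steps_per_t):
            cand = w + rng.normal(scale=t, size=w.shape)  # neighbour
            e_new = objective(cand)
            # Step 2: accept if better, or with probability exp(-dE/T)
            if e_new < e or rng.random() < np.exp(-(e_new - e) / t):
                w, e = cand, e_new
                if e < best_e:
                    best_w, best_e = w, e
        t *= alpha                       # Step 2: cool down
    return best_w, best_e

# toy objective: squared distance to a known optimum
target = np.array([0.5, -1.0, 2.0])
w, e = simulated_annealing(lambda v: np.sum((v - target) ** 2),
                           np.zeros(3))
print("final error:", e)
```

In the actual model, `objective` would measure the identification error of the Hopfield network for a candidate weight matrix, and the returned best solution gives the optimized weights.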

The solution obtained by SA gives the optimal weights of the Hopfield neural network. To better illustrate the algorithm, the flow chart of our method is shown in Figure 3.

4. Experimental Results

We choose three typical shaft orbit shapes as the experimental samples, as shown in Figure 4; their moments are listed in Table 2.

The first line is an ellipse. It generally corresponds to an unbalance fault; we mark it as the first category.

The second line is an outer figure-eight. It generally corresponds to a misalignment fault and is marked as the second category.

The third line is an inner figure-eight. As the third category, it generally corresponds to an oil whirl fault.

Figure 5 shows some of the images to be recognized; the seven moments of the corresponding images are listed in Table 3.

Table 3 contains singular data, that is, values that are particularly large or small relative to the others, while the input data of a Hopfield network generally lie between −1 and 1. Therefore, the training and test samples are normalized before training:
$$x' = \frac{2(x - \min)}{\max - \min} - 1,$$
where max and min are the maximum and minimum values shown in Table 3.
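This normalization step can be sketched column-wise, mapping each feature into [−1, 1] (illustrative numbers, not the Table 3 values):

```python
import numpy as np

def normalize(X):
    """Min-max normalize each feature column of X into [-1, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2 * (X - lo) / (hi - lo) - 1

# toy feature matrix: one column small-valued, one large-valued
X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 500.0]])
print(normalize(X))   # each column maps to -1, 0, 1
```

After this step, the large and small singular values no longer dominate the network inputs.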

After processing the original data, we use the standardized data for training, establish the Hopfield network, and utilize the Simulated Annealing algorithm to optimize it. To evaluate our method, the experiments are divided into four parts: (1) directly using the seven Hu moments to construct the Hopfield neural network; (2) directly using the seven Hu moments to construct the Hopfield neural network and using SA to optimize the connection weights; (3) using the six converted moments to construct the Hopfield neural network; (4) using the six converted moments to construct the Hopfield neural network and using SA to optimize the connection weights.

The identification results are shown in Table 4.

From Table 4 we can clearly see that SA-Hopfield reaches the highest accuracy. Figure 6 shows the identification precision of the four methods.

5. Conclusion

In this paper, the invariant moments of the two-dimensional orbit graphics serve as the input feature vectors of a Hopfield neural network, and the Simulated Annealing algorithm is utilized to optimize the weight matrix of the network. The simulation results show that our method performs better than previous methods. However, some problems still call for further study. Obtaining accurate reference samples is one of the most difficult steps: the feature vectors under the various fault states require a long stage of exploration to establish, and no comprehensive and clear consensus has yet formed. This paper only considered three common faults (unbalance, misalignment, and oil whirl) in the experiments. Further work will cover more complex structures, and other methods such as [24] should be explored.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.