Mathematical Problems in Engineering
Volume 2014, Article ID 571354, 6 pages
http://dx.doi.org/10.1155/2014/571354
Research Article

Identification of Shaft Centerline Orbit for Wind Power Units Based on Hopfield Neural Network Improved by Simulated Annealing

North China University of Water Resources and Electric Power, Zhengzhou 450011, China

Received 3 February 2014; Accepted 23 February 2014; Published 26 March 2014

Academic Editor: Zhijun Zhang

Copyright © 2014 Kun Ren and Jihong Qu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In the maintenance of wind power units, the shaft centerline orbit is an important feature for diagnosing the status of the unit. This paper diagnoses the orbit as follows: the characteristics of the orbit are extracted with affine invariant moments; these serve as the feature parameters of a neural network identification model; the Simulated Annealing (SA) algorithm is used to optimize the weight matrix of the Hopfield neural network; and several typical faults are then selected as identification examples. The experimental results show that the SA-Hopfield identification model performs better than previous methods.

1. Introduction

The operating environment and the structure of wind turbines are very complex, and most failures are caused by vibration. These vibrations seriously affect the functioning of the unit and shorten its service life, so diagnosing turbine vibration faults is necessary work. Common diagnostic methods extract spectral information from the vibration signal, so the information is limited and incomplete, and the feature data are sometimes nonlinearly related. The shaft centerline orbit, in contrast, is a carrier of the unit's state information; it overcomes both of the drawbacks above and can therefore reflect fault information more efficiently and accurately.

Currently, most approaches to shaft centerline orbit recognition extract features and then apply pattern recognition methods for classification. References [1–3] presented feature extraction based on the Hu invariant moments, and [4–6] proposed recognition methods based on the BP network. However, these methods leave two problems unsolved. First, because the data are collected on site through an acquisition card as discrete samples, the features cannot accurately describe the characteristics of the centerline orbit. Second, the initial weights and thresholds of the BP neural network are generated randomly, so the BP network easily falls into local optima and the precision of the diagnosis is low.

To improve the accuracy of the classification model, we introduce the Hopfield neural network in this paper. Figure 1 shows the simple structure of a neuron, which computes its output from the input data, thresholds, and weights. Because of this nonlinear transmission capacity of neurons, neural networks have been widely used for pattern recognition and parameter fitting in recent years.

Figure 1: Structure of Neurons.

An artificial neural network (ANN) is a simulation of the nervous system of the brain and shares many of its features [7–11]. An ANN is capable of learning the key information patterns within a multidimensional information domain and can therefore be used in many applications for tracking and predicting complex energy systems [12–15]. The human brain is a highly complex machine capable of nonlinear, parallel computing; it can organize its neurons and, for certain tasks (such as pattern recognition, perception, and motor control), is faster than today's fastest computers. Because our understanding of the brain's mechanism of intelligence, and the related scientific and technical level, are limited, neural networks simulate the brain through reasonable simplification and abstraction. A neural network is therefore a massively parallel distributed processing system built from a large number of neural units. It resembles the brain in two respects: (1) the network acquires knowledge by learning from the external environment; (2) the acquired knowledge is stored in the connection strengths between neurons (the synaptic weights).

The Hopfield network [16–19] is a kind of interconnected network. It introduces the concept of an energy function, analogous to the Lyapunov function: the topological structure of the network (represented by the connection matrix) is made to correspond to the optimization problem (described by the objective function), which converts the problem into the evolution of a dynamical system. This evolution is a nonlinear dynamical system that can be described by a set of difference equations (discrete case) or differential equations (continuous case). The stability of the system can be analyzed with the energy function: when the required conditions are satisfied, the energy decreases continuously during the operation of the network and finally reaches equilibrium at a stable state. For a nonlinear dynamical system, the possible outcomes of evolution from an initial state are asymptotically stable points, limit cycles, chaos, and divergence.

Because the transfer function of an artificial neural network is bounded, divergence cannot occur in the state of the system. In current applications, artificial neural networks usually exploit the asymptotically stable points to solve problems. If the stable points of the system are regarded as memories, the evolution from an initial state toward a stable state is the process of retrieving a memory. If instead a stable point is regarded as a minimum of an energy function and the energy function as an objective function, then the evolution becomes an optimization process. Thus, the evolution of a Hopfield neural network implements associative memory or solves an optimization problem. In fact, nothing needs to be solved explicitly: with properly designed connection weights and inputs, the feedback network reaches the answer by itself.

The paper is organized as follows. In Section 2, we briefly describe the feature extraction of the shaft centerline orbit. In Section 3, we introduce the Hopfield neural network and the Simulated Annealing algorithm and use SA to improve the classification precision of the Hopfield NN. In Section 4, we present the details of our experiment and compare the precision of the Hopfield NN and the SA-Hopfield NN. Conclusions and directions for further study are discussed in Section 5.

2. Feature Extraction of Shaft Centerline Orbit

The orbit recognition problem is essentially a two-dimensional pattern recognition problem that relies mainly on extracted feature information to identify patterns. We use discrete Hu invariant moments [4, 20, 21] to extract the features of the shaft orbit.

2.1. Discrete Hu Invariant Moments

Definition 1 (moments and central moments). Because the orbit is measured by the acquisition card as a discrete quantity, the image of the track is assumed to consist of discrete points $(x_i, y_i)$, $i = 1, 2, \ldots, N$. The discrete moments and central moments are defined as
$$m_{pq} = \sum_{i=1}^{N} x_i^p y_i^q, \qquad \mu_{pq} = \sum_{i=1}^{N} (x_i - \bar{x})^p (y_i - \bar{y})^q,$$
where $p, q = 0, 1, 2, \ldots$ and $(\bar{x}, \bar{y}) = (m_{10}/m_{00},\ m_{01}/m_{00})$ is the centroid of the image. From these formulas we can see that the central moments are translation invariant.
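As a minimal sketch of these definitions for an orbit sampled as N points with unit weight (the function names are our own, not from the paper):

```python
import numpy as np

def discrete_moment(points, p, q):
    """Discrete raw moment m_pq of a set of 2-D orbit points."""
    x, y = points[:, 0], points[:, 1]
    return np.sum((x ** p) * (y ** q))

def central_moment(points, p, q):
    """Central moment mu_pq about the centroid (translation invariant)."""
    x, y = points[:, 0], points[:, 1]
    m00 = len(points)                      # zeroth moment = number of points
    xc, yc = x.sum() / m00, y.sum() / m00  # centroid (m10/m00, m01/m00)
    return np.sum(((x - xc) ** p) * ((y - yc) ** q))
```

Shifting every point by the same offset leaves the central moments unchanged, which is the translation invariance claimed above.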

Definition 2 (normalized central moments). Normalization by $\mu_{00}$ preserves the translation invariance of the graphic; the expression is
$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\rho}}, \qquad \rho = \frac{p+q}{2} + 1.$$

Definition 3 (translation and rotation invariant moments). Although $\eta_{pq}$ satisfies translation invariance, it fails to satisfy rotational invariance. Hu derived a complete set of seven second- and third-order invariant moments, as follows:
$$\begin{aligned}
\phi_1 &= \eta_{20} + \eta_{02},\\
\phi_2 &= (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,\\
\phi_3 &= (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,\\
\phi_4 &= (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,\\
\phi_5 &= (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right]\\
&\quad + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right],\\
\phi_6 &= (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),\\
\phi_7 &= (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right]\\
&\quad - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right].
\end{aligned}$$

The invariant moment features defined by the above formulas can accurately reflect the basic shape features of the orbit. For example, $\phi_1$ measures the degree of divergence of the track about its axis: the larger $\phi_1$ is, the greater the divergence of the trajectory. Likewise, the smaller $\phi_2$ is, the better the symmetry of the graphic, so $\phi_2$ measures the symmetry of the track about its axis.
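The full feature extraction can be sketched by computing Hu's seven invariants from the normalized central moments of the sampled orbit points; this is the standard formulation, and the function name is our own:

```python
import numpy as np

def hu_moments(points):
    """Hu's seven invariant moments of a set of 2-D orbit points."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    m00 = float(len(points))
    dx, dy = x - x.mean(), y - y.mean()

    def eta(p, q):
        # normalized central moment: eta_pq = mu_pq / mu_00^((p+q)/2 + 1)
        return np.sum((dx ** p) * (dy ** q)) / (m00 ** (1.0 + (p + q) / 2.0))

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi5 = ((n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    phi6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03))
    phi7 = ((3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```

Rotating the point set rotates the empirical distribution exactly, so the seven values are unchanged up to floating-point error, which is what makes them usable as orbit shape features.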

2.2. Improvement of the Hu Moments

In calculating the moments above, we find that the Hu invariant moments are not completely invariant. Take the simplest graphic, a circle, as an example: as the radius increases, the moments also increase, as shown in Table 1.

Table 1: Moments of different radius.

From Table 1, we can see that the Hu invariant moments are not scale invariant in the discrete model: moments that are invariant for continuous functions deform under scaling when applied to discrete data, so they cannot be applied directly in the discrete case. R. Wong proved that the seven moments remain invariant only when the scaling factor and the rotation angle of the graphic satisfy certain conditions.

From the formulas for $\phi_1, \ldots, \phi_7$ we can derive how the scale factor enters each moment. To remove this factor, we use the following algorithm:

The six invariant moments obtained from the above formulas on the edge features of an image describe the shape of the orbit accurately and establish the relationship between the shape characteristics and the invariant moments.

3. Hopfield Neural Network and Simulated Annealing

3.1. Hopfield NN for Identification

The Hopfield neural network is a feedback neural network proposed in 1982 by J. J. Hopfield, a physicist at the California Institute of Technology. It augments the earlier feedback neural networks with the concept of an energy function. Energy values are perceived and transmitted between neurons: the output vector is fed back through the network, the state is updated according to the weights, and the solution is iterated until the energy of the network stops decreasing and a stable state, the optimal solution, is reached. The basic structure of the Hopfield network is shown in Figure 2.

Figure 2: Structure of Hopfield neural network.

The network is divided into two parts, an input layer and an output layer. Starting from the initial input vector, the output layer feeds the output vector back through the weight matrix.

The main processes of the Hopfield network identification model are detailed as follows.

Step 1. Extract the Hu invariant moments of every image.

Step 2. Convert them to the six improved moments.

Step 3. Construct the Hopfield Neural Network.

Step 4. Identify the orbit.

It is well known that the connection weights of a neural network greatly affect its performance. To improve the recognition precision of the network, in the following section we introduce the simulated annealing algorithm to optimize the network.
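Steps 1–4 do not fix an implementation. As an illustrative sketch, a minimal discrete Hopfield network used as an associative classifier might look like the following; the Hebbian storage rule and bipolar coding are our assumptions, not details from the paper:

```python
import numpy as np

class HopfieldNet:
    """Minimal discrete Hopfield network with Hebbian weights.

    Stores bipolar (+1/-1) prototype patterns; identification runs the
    network from a (possibly noisy) feature pattern to a stored attractor.
    """

    def __init__(self, patterns):
        patterns = np.asarray(patterns, dtype=float)
        n = patterns.shape[1]
        # Hebbian outer-product rule with zero self-connections
        self.W = patterns.T @ patterns / n
        np.fill_diagonal(self.W, 0.0)

    def recall(self, state, max_sweeps=50):
        s = np.asarray(state, dtype=float).copy()
        for _ in range(max_sweeps):
            prev = s.copy()
            for i in np.random.permutation(len(s)):  # asynchronous update
                s[i] = 1.0 if self.W[i] @ s >= 0 else -1.0
            if np.array_equal(s, prev):  # stable state reached
                break
        return s
```

Identification would then amount to mapping each quantized moment vector to a bipolar code, recalling, and matching the resulting attractor to the stored fault prototypes.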

3.2. Optimization by SA

The Simulated Annealing (SA) algorithm [22, 23] is a heuristic search algorithm widely used for a variety of combinatorial optimization problems. It mimics the cooling process of a solid: starting from a high initial temperature, the temperature is gradually decreased while the solution is improved by iteration. During the optimization, a new solution is accepted if its objective value is better than the current one or if it satisfies the Boltzmann acceptance probability. The algorithm steps are as follows.

Step 1. Initialize the network parameters .

Step 2. Decrease the temperature, generate a new solution, and compute its evaluation function value. If the change in the objective is negative, or the Boltzmann acceptance probability $\exp(-\Delta E / T)$ is satisfied, accept the new solution and update the current one.

Step 3. Terminate the algorithm when the temperature drops below the threshold or no better solution is found over a large number of iterations.
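The three steps above can be sketched as a generic SA loop; the geometric cooling schedule and the parameter names are illustrative assumptions, not the paper's exact settings:

```python
import math
import random

def simulated_annealing(objective, init, neighbor,
                        t0=1.0, t_min=1e-4, alpha=0.95, iters_per_t=50):
    """Generic SA minimizer: geometric cooling with Boltzmann
    acceptance exp(-delta/T) for worse candidate solutions."""
    current = best = init
    f_cur = f_best = objective(init)
    t = t0
    while t > t_min:                       # Step 3: temperature threshold
        for _ in range(iters_per_t):
            cand = neighbor(current)       # Step 2: new candidate solution
            f_cand = objective(cand)
            delta = f_cand - f_cur
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, f_cur = cand, f_cand
                if f_cur < f_best:         # track the best solution seen
                    best, f_best = current, f_cur
        t *= alpha                         # Step 2: decrease the temperature
    return best, f_best
```

For weight optimization, `objective` would score a candidate weight matrix by the network's classification error and `neighbor` would perturb the weights slightly.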

The solution obtained by SA gives the optimal weights of the Hopfield neural network. To better illustrate the algorithm, the flow chart of our method is shown in Figure 3.

Figure 3: Diagram of our identification method.

4. Experimental Results

We choose three typical shaft orbit shapes as the experimental samples, as shown in Figure 4; their moments, computed as described above, are listed in Table 2.

Table 2: Training data of Hopfield neural network.
Figure 4: Examples of testing images.

The first shape is elliptical. It generally corresponds to an unbalance fault, and we mark it as the first category.

The second shape is an external figure eight. It generally corresponds to a misalignment fault and is marked as the second category.

The third shape, an internal figure eight, generally corresponds to an oil whirl fault and forms the third category.

Figure 5 shows some of the images to be recognized; the seven moments of the corresponding images are listed in Table 3.

Table 3: Test data of Hopfield neural network.
Figure 5: Examples of training images.

Table 3 contains singular data: values that are particularly large or small relative to the others. Since the input of a Hopfield network generally lies between −1 and 1, both the training data and the test sample data are normalized before training:
$$x' = \frac{2(x - \min)}{\max - \min} - 1,$$
where max and min are the maximum and minimum values shown in Table 3.
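A minimal sketch of this preprocessing, assuming the usual column-wise min-max mapping to [−1, 1] (the function name is our own):

```python
import numpy as np

def normalize_features(X):
    """Column-wise min-max scaling of a feature matrix to [-1, 1],
    the usual input range for a Hopfield network (assumed mapping)."""
    X = np.asarray(X, dtype=float)
    xmin = X.min(axis=0)
    xmax = X.max(axis=0)
    return 2.0 * (X - xmin) / (xmax - xmin) - 1.0
```

Each moment column is scaled independently, so a singular value in one feature no longer dominates the others.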

After preprocessing the original data, we use the standardized training data to establish the Hopfield network and utilize the Simulated Annealing algorithm to optimize it. To evaluate our method, the experiments are divided into four parts: (1) constructing the Hopfield neural network directly from the seven Hu moments; (2) constructing the Hopfield neural network from the seven Hu moments and using SA to optimize the connection weights; (3) constructing the Hopfield neural network from the six converted moments; (4) constructing the Hopfield neural network from the six converted moments and using SA to optimize the connection weights.

The identification results are shown in Table 4.

Table 4: Precision of the different methods.

From Table 4 we can clearly see that SA-Hopfield reaches the highest accuracy. Figure 6 shows the identification precision of the four methods.

Figure 6: Identification precision.

5. Conclusion

In this paper, invariant moments of the two-dimensional orbit graphics act as the input feature vectors of a Hopfield neural network, and the Simulated Annealing algorithm is utilized to optimize the weight matrix of the network. The simulation results show that our method performs better than previous methods. However, some problems remain for further study. Obtaining accurate reference samples is one of the most difficult steps: the feature vectors under various fault states can only be obtained through a long stage of exploration, and no comprehensive, clear consensus has yet formed. This paper considered only three common faults (unbalance, misalignment, and oil whirl) in its experiments. Further work will cover more complex structures, and other methods such as [24] should be explored.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. P. Guo, X. Luo, Y. Wang, L. Bai, and H. Li, “Identification of shaft centerline orbit for hydropower units based on particle swarm optimization and improved BP neural network,” Proceedings of the Chinese Society of Electrical Engineering, vol. 31, no. 8, pp. 93–97, 2011.
  2. X. Q. Zhou and J. Wang, “Design of shaft centerline analyzer based on virtual instrument,” China Measurement & Test, vol. 6, no. 36, pp. 45–48, 2010.
  3. B. Li, S. X. Shi, and S. Wang, “Image recognition based on chaotic-particle swarm-optimization-neural network algorithm,” Advanced Materials Research, vol. 655, pp. 969–973, 2013.
  4. M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
  5. J. Tong, Z. T. Wu, and G. B. Yan, “Fault diagnosis of large rotating machinery using BP network and hidden Markov models,” Journal of Vibration, Measurement & Diagnosis, vol. 3, no. 19, pp. 193–195, 1999.
  6. L. Zhao and Z. Sheng, “Combination of discrete cosine transform with neural network in fault diagnosis for rotating machinery,” in Proceedings of the IEEE International Conference on Industrial Technology, pp. 450–454, December 1994.
  7. A. K. Krishnamurthy, S. C. Ahalt, D. E. Melton, and P. Chen, “Neural networks for vector quantization of speech and images,” IEEE Journal on Selected Areas in Communications, vol. 8, no. 8, pp. 1449–1457, 1990.
  8. I. Aleksander and H. B. Morton, “General neural unit: retrieval performance,” Electronics Letters, vol. 27, no. 19, pp. 1776–1778, 1991.
  9. R. Rodriguez, I. Bukovsky, and N. Homma, “Potentials of quadratic neural unit for applications,” International Journal of Software Science and Computational Intelligence, vol. 3, no. 3, pp. 1–12, 2011.
  10. J. Wang and W. Wan, “Application of desirability function based on neural network for optimizing biohydrogen production process,” International Journal of Hydrogen Energy, vol. 34, no. 3, pp. 1253–1259, 2009.
  11. J. Wang and W. Wan, “Optimization of fermentative hydrogen production process using genetic algorithm based on neural network and response surface methodology,” International Journal of Hydrogen Energy, vol. 34, no. 1, pp. 255–261, 2009.
  12. S. A. Kalogirou, “Applications of artificial neural-networks for energy systems,” Applied Energy, vol. 67, no. 1-2, pp. 17–35, 2000.
  13. H. F. Tuo, “Thermo-economic analysis of a transcritical Rankine power cycle with reheat enhancement for a low-grade heat source,” International Journal of Energy Research, vol. 37, no. 8, pp. 857–867, 2013.
  14. M. C. Mabel and E. Fernandez, “Analysis of wind power generation and prediction using ANN: a case study,” Renewable Energy, vol. 33, no. 5, pp. 986–992, 2008.
  15. H. F. Tuo, “Energy and exergy-based working fluid selection for organic Rankine cycle recovering waste heat from high temperature solid oxide fuel cell and gas turbine hybrid systems,” International Journal of Energy Research, vol. 37, no. 14, pp. 1831–1841, 2013.
  16. T. Sun and X. Wu, “Image restoration based on parallel GA and Hopfield NN,” in Proceedings of the 9th International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES '10), pp. 565–567, August 2010.
  17. Y. Uwate, Y. Nishio, T. Ueta, T. Kawabe, and T. Ikeguchi, “Performance of chaos and burst noises injected to the Hopfield NN for quadratic assignment problems,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. 87, no. 4, pp. 937–943, 2004.
  18. T. Yalcinoz and M. J. Short, “Large-scale economic dispatch using an improved Hopfield neural network,” IEE Proceedings, Generation, Transmission and Distribution, vol. 144, no. 2, pp. 181–185, 1997.
  19. T. Yalcinoz and M. J. Short, “Neural networks approach for solving economic dispatch problem with transmission capacity constraints,” IEEE Transactions on Power Systems, vol. 13, no. 2, pp. 307–313, 1998.
  20. Y. Li, “Reforming the theory of invariant moments for pattern recognition,” Pattern Recognition, vol. 25, no. 7, pp. 723–730, 1992.
  21. J. C. Terrillon, M. David, and S. Akamatsu, “Automatic detection of human faces in natural scene images by use of a skin color model and of invariant moments,” in Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 112–117, 1998.
  22. E. H. L. Aarts and J. H. M. Korst, “Boltzmann machines as a model for parallel annealing,” Algorithmica, vol. 6, no. 1–6, pp. 437–465, 1991.
  23. H. Szu and R. Hartley, “Fast simulated annealing,” Physics Letters A, vol. 122, no. 3-4, pp. 157–162, 1987.
  24. W. Jiang, J. A. Joens, D. Dionysiou, and K. E. O'Shea, “Optimization of photocatalytic performance of TiO2 coated glass microspheres using response surface methodology and the application for degradation of dimethyl phthalate,” Journal of Photochemistry and Photobiology A, vol. 262, pp. 7–13, 2013.