Mathematical Problems in Engineering
Volume 2014, Article ID 847623, 8 pages
Research Article

Fault Diagnosis of Oil-Immersed Transformers Using Self-Organization Antibody Network and Immune Operator

Liwei Zhang

School of Electrical Engineering, Northeast Dianli University, Jilin 132012, China

Received 2 July 2014; Accepted 18 August 2014; Published 16 November 2014

Academic Editor: Yoshinori Hayafuji

Copyright © 2014 Liwei Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


There are drawbacks when diagnosis techniques based on a single intelligent method are applied to identify incipient faults in power transformers. In this paper, a hybrid immune algorithm is proposed to improve the reliability of fault diagnosis. The proposed algorithm is a hybridization of the self-organization antibody network (soAbNet) and an immune operator. The immune operator has two phases: vaccination and immune selection. In the vaccination phase, vaccines are obtained from the training dataset by the consistency-preserving K-means algorithm (K-means-CP) and are taken as the initial antibodies for soAbNet. After the soAbNet is trained, immune selection is applied to optimize the memory antibodies in the trained network. The effectiveness of the proposed algorithm is verified on a benchmark classification dataset and a real-world transformer fault dataset. For comparison, three transformer diagnosis methods, namely, the IEC criteria, back propagation neural network (BPNN), and soAbNet, are utilized. The experimental results indicate that the proposed approach can extract the dataset characteristics efficiently and that its diagnostic accuracy is higher than that obtained with the individual methods.

1. Introduction

Power transformers are essential equipment in electric power transmission systems and are often the most expensive devices in a substation. Failures of large power transformers can cause operational problems to the transmission system [1].

Dissolved gas analysis (DGA) is one of the most widely used tools to diagnose the condition of oil-immersed transformers in service. The ratios of certain dissolved gases in the insulating oil can be used for qualitative determination of fault types. Several criteria have been established to interpret results from laboratory analysis of dissolved gases, such as IEC 60599 [2] and IEEE Std C57.104-2008 [3]. However, interpreting the gases generated in mineral-oil-filled transformers is often complicated, since the interpretation depends on equipment variables.

With the development of artificial intelligence, various intelligent methods have been applied to improve the DGA reliability for oil-immersed transformers. Based on the DGA technique, an expert system was proposed for transformer fault diagnosis and corresponding maintenance actions, and the test results showed that it was effective [4]. Using a fuzzy information approach, Tomsovic et al. [5] developed a framework that combined several transformer diagnostic methods to provide the "best" conclusion. Based on artificial neural networks (ANN), Zhang et al. presented a two-step method for fault detection in oil-filled transformers, and the proposed approach achieved good diagnostic accuracy [6]. Lin et al. proposed a novel fault diagnosis method for power transformers based on the Cerebellar Model Articulation Controller (CMAC), and the new scheme showed high accuracy [7]. Huang presented a fault detection approach for oil-immersed power transformers based on genetic algorithm tuned wavelet networks (GAWNs), demonstrating remarkable diagnosis accuracy [8]. Chen et al. studied the efficiency of wavelet networks (WNs) for transformer fault detection using gases-in-oil samples, and the diagnostic accuracy and efficiency proved better than those derived from BPNN [9]. A fault classifier for power transformers was proposed by Zheng et al. based on multiclass least square support vector machines (LS-SVM) [10]. Hao and Cai-Xin developed an artificial immune algorithm for transformer fault detection [11]. These novel diagnosis methods overcome the drawbacks of the IEC method and improve the diagnosis accuracy. However, each intelligent method has its own advantages and disadvantages, and there are limitations when a single method is applied to transformer fault diagnosis [12–14].

At present, hybridization of two or more intelligent methods is one of the most intensively growing areas and offers a new way for transformer fault diagnosis [15, 16]. A multilevel decision-making model was proposed for incipient fault detection of oil-immersed transformers by using K-nearest neighbor (K-NN) to improve the classification results of SVM [17]. Morais and Rolim developed an integrated method to detect incipient faults in power transformers, combining some traditional criteria with two other intelligent techniques [18]. Shintemirov et al. utilized bootstrap to preprocess DGA samples and extracted features with genetic programming (GP), and the combined approach improved the fault diagnosis accuracy [19]. A hybrid neural fuzzy method was presented for dissolved gas analysis, combining a neural fuzzy model and competitive learning [20]. Du and Wang provided a priori knowledge for an artificial immune network using an adaptive resonance network (ART) [21], which inspires the combination of artificial immune networks with other intelligent methods. The existing literature shows that making full use of a priori knowledge and combining multiple algorithms can effectively improve the diagnosis reliability of dissolved gas analysis.

As an artificial intelligence (AI) technique, the artificial immune system (AIS) is inspired by biological immunology and aims at solving engineering or complex computational problems [22]. Numerous studies have applied AIS to machine learning and have shown tremendous potential in data classification, pattern recognition, and function optimization [23–25]. As a novel artificial immune system, the self-organization antibody network (soAbNet) learns and memorizes the characteristics of antigens effectively through three different strategies [26]. In general, the initial antibodies in soAbNet are randomly selected from the training dataset. However, the learning process of the immune algorithm is sensitive to the initial antibodies. On the other hand, there is no network suppression mechanism to control the specificity level of the memory antibodies. As a result, redundant antibodies may exist in the trained network.

Based on the works above, this paper proposes a hybrid immune algorithm combining an immune operator and the self-organization antibody network for transformer fault diagnosis. The immune operator has two phases: vaccination and immune selection. Vaccination preprocesses transformer fault samples using the K-means-CP clustering algorithm, and the obtained cluster centers are taken as the initial antibodies of soAbNet. During the training of soAbNet, memory antibodies are suppressed by the immune selection of the immune operator to enhance antibody quality. In order to show the validity of the proposed algorithm, experiments are performed and the results are illustrated.

2. Self-Organization Antibody Network

2.1. Brief Description of soAbNet

In soAbNet, antibodies and T-cells are uniformly abstracted as antibodies, ignoring their differences [26]. Antibodies are regarded as vertices in the immune network, and antibodies of the same class are connected to form a semiconnected graph. Each antibody carries a concentration as its weight, representing its recognition capability. The whole network is thus expressed as an incompletely connected graph with weighted vertices. Each antibody has the abilities of recognition, response, and memory for antigens, and antibodies learn, memorize, and recognize antigens through antigen-antibody and antibody-antibody interactions.

Figure 1 shows a soAbNet consisting of three classes of antibodies. The numbers 1, 2, and 3 denote the class of each antibody. Antibodies of the same class are connected via class information, and the weight attached to each vertex is the antibody concentration.

Figure 1: Architecture of self-organization antibody network.
2.2. Antigen and Antibody Coding

In self-organization antibody network, antigen (Ag) and antibody (Ab) are encoded in real value, and their coding structures are different from each other.

(1) Antigen Coding. Each antigen is regarded as a point in its shape space and is represented by an n-dimensional vector Ag = (ag1, ag2, …, agn), where each attribute agi represents a feature of the corresponding antigen.

(2) Antibody Coding. Antibody coding consists of the antibody's basic information and its attribute information. An antibody can be defined as Ab = (c, d, ab1, ab2, …, abn), where c is the class Ab belongs to, d is its concentration, and ab1, ab2, …, abn are the n-dimensional attributes representing the features of Ab.

2.3. Learning and Memory Strategies

There are three learning strategies in soAbNet as follows.

(1) Antibody Evolution. An antibody in the immune network learns and memorizes the characteristics of an input antigen by concentration adjustment and attribute value adjustment: the recognizing antibody's attribute values are updated toward the antigen, and its concentration is increased [26].

(2) Antibody Combination. In soAbNet, two antibodies of the same class combine into one antibody to eliminate redundancy, by concentration adjustment and attribute value adjustment, when the distance between them (their affinity measure) is less than a given threshold. Antibodies Ab_j and Ab_k combine into one antibody Ab_new, and the original antibodies Ab_j and Ab_k are removed from the memory antibody set; the attributes of Ab_new are a concentration-weighted combination of those of Ab_j and Ab_k.

(3) Antibody Generation. In soAbNet, when the memory antibodies cannot identify the input antigen during the training process, a new antibody is added to the network to learn and memorize that antigen. If Ag is the unrecognized input antigen, a new antibody carrying the attribute values and class of Ag, with an initial concentration, is added to the memory antibody set.
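The three strategies can be sketched as follows. This is a minimal illustration, not the authors' exact equations (those are given in ref. [26]); the learning rate eta, the unit concentration increment, and the initial concentration d0 are illustrative assumptions.

```python
import numpy as np

def antibody_evolution(ab_attrs, ab_conc, ag, eta=0.1):
    """Move the antibody's attribute values toward the antigen and raise its concentration."""
    new_attrs = ab_attrs + eta * (ag - ab_attrs)
    return new_attrs, ab_conc + 1.0

def antibody_combination(attrs_j, conc_j, attrs_k, conc_k):
    """Merge two same-class antibodies into one concentration-weighted antibody."""
    conc_new = conc_j + conc_k
    attrs_new = (conc_j * attrs_j + conc_k * attrs_k) / conc_new
    return attrs_new, conc_new

def antibody_generation(ag, ag_class, d0=1.0):
    """Create a new memory antibody that copies the unrecognized antigen."""
    return ag.copy(), ag_class, d0
```

With equal concentrations, combination reduces to the midpoint of the two attribute vectors, which matches the intent of removing a redundant pair.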

2.4. Learning Algorithm

The learning algorithm of soAbNet can be described as follows.

Step 1 (initialization). Randomly select samples from training dataset as initial antibodies.

Step 2. For each input antigen Ag, do the following.

Step 2.1. Calculate the affinity between Ag and each antibody in the network.

Step 2.2. Select the antibody Ab_h whose affinity with Ag is the highest.

Step 2.3. If the class of Ab_h differs from the class of Ag, execute antibody generation, and go to Step 3.

Step 2.4. If the class of Ab_h is the same as the class of Ag, Ab_h is regarded as the best recognition antibody Ab_best.

Step 2.5. Calculate the affinities between Ab_best and each antibody of the same class, and select the antibody Ab_near whose affinity with Ab_best is the highest.

Step 2.6. If the affinity between Ab_best and Ab_near is higher than the affinity between Ab_best and Ag, execute antibody combination between Ab_best and Ab_near and antibody generation according to the input antigen Ag, and go to Step 3.

Step 2.7. Execute antibody evolution between Ab_best and Ag.

Step 3. If stopping criterion is met, end training; otherwise, go to Step 2.

The affinity is measured by the Euclidean distance between two vectors: the smaller the distance, the higher the affinity. The stopping criterion is that the number of memory antibodies in the network is unchanged between two successive training passes. The network output is the mature memory antibody set, representing the antigen characteristics of each class, which can be used for pattern recognition directly.
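The learning loop above can be sketched as follows. This is a minimal illustration, assuming a simple attribute-update rule (learning rate eta) and unit initial concentration in place of the exact equations of ref. [26]; antibodies are (attributes, class, concentration) tuples.

```python
import numpy as np

def affinity(a, b):
    """Affinity measured by Euclidean distance: smaller distance = higher affinity."""
    return np.linalg.norm(a - b)

def train_soabnet(antigens, labels, init_idx, eta=0.1, max_epochs=50):
    # Step 1: initial antibodies are samples taken from the training data
    abs_ = [(antigens[i].copy(), labels[i], 1.0) for i in init_idx]
    for _ in range(max_epochs):
        n_before = len(abs_)
        for ag, y in zip(antigens, labels):
            # Steps 2.1-2.2: best-matching antibody over the whole network
            h = min(range(len(abs_)), key=lambda i: affinity(abs_[i][0], ag))
            if abs_[h][1] != y:
                abs_.append((ag.copy(), y, 1.0))  # Step 2.3: antibody generation
                continue
            # Step 2.5: nearest same-class antibody to the best one
            same = [i for i in range(len(abs_)) if abs_[i][1] == y and i != h]
            if same:
                n = min(same, key=lambda i: affinity(abs_[i][0], abs_[h][0]))
                if affinity(abs_[n][0], abs_[h][0]) < affinity(abs_[h][0], ag):
                    # Step 2.6: combine the two close antibodies, memorize Ag
                    aj, _, dj = abs_[h]
                    ak, _, dk = abs_[n]
                    combo = ((dj * aj + dk * ak) / (dj + dk), y, dj + dk)
                    for i in sorted((h, n), reverse=True):
                        del abs_[i]
                    abs_.extend([combo, (ag.copy(), y, 1.0)])
                    continue
            # Step 2.7: evolve the best antibody toward the antigen
            a, _, d = abs_[h]
            abs_[h] = (a + eta * (ag - a), y, d + 1.0)
        # Step 3: stop when the network size is unchanged over an epoch
        if len(abs_) == n_before:
            break
    return abs_
```

The trained antibody set can then classify a sample by the nearest-antibody rule.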

3. Proposed Hybrid Immune Algorithm

As in most immune algorithms, the initial antibodies in soAbNet are randomly selected from the training set, which does not make full use of the prior knowledge in that set. Because the learning process of the immune algorithm is sensitive to the initial antibodies, randomly selected initial antibodies with different distributions in shape space lead to different convergence rates during learning, and the quality of the memory antibodies varies. As a result, the network performance is unstable, and the antigen recognition capability is affected to some extent.

On the other hand, there is no network compression mechanism to simplify the network structure. The network size increases with the number of initial antibodies and training iterations, so a number of redundant antibodies would exist in the trained network.

To improve the performance of soAbNet, a hybrid immune algorithm combining immune operator and soAbNet is proposed in this section.

3.1. Immune Operator

The immune operator is a two-step process consisting of vaccination and immune selection. Vaccination provides initial antibodies, whereas immune selection optimizes memory antibodies [27]. The quality of the vaccine affects the convergence rate of the learning process, but it does not affect the convergence of the algorithm, which is guaranteed by immune selection during learning. In the next two sections, vaccination and immune selection are constructed in accordance with soAbNet.

3.2. Vaccination

A notable capability of the biological immune system is learning to identify a new infection through vaccines. Vaccines cause the body to produce antibodies against specific pathogens, in order to speed up the response to infections.

In artificial immune systems, vaccines would be obtained from the known information of the problem to be solved, in order to guide the learning process. In practice, vaccines can be obtained by using optimization algorithms.

In this study, the consistency-preserving K-means algorithm (K-means-CP) was used to obtain vaccines from the training set for each class. The vaccines, that is, the cluster centers, were then taken as the initial antibodies of soAbNet.

The number of vaccines, that is, the cluster number k, is assigned according to the complexity of the training dataset. Since many studies have addressed determining the optimal cluster number of a dataset [28–30], this estimation is not discussed further in this section.

The K-means-CP algorithm can be described as follows [31].

Step 1. Initialize the k cluster centers c_1, …, c_k.

Step 2. Assign cluster membership, one closed-neighbor-set at a time: assign all objects of a closed-neighbor-set S to the closest cluster C_j, where closeness is defined in the average sense, that is, j = argmin_m (1/|S|) Σ_{x∈S} ||x − c_m||².

Step 3. Update the centers: c_j = (1/|C_j|) Σ_{x∈C_j} x, where c_j is the centroid of cluster C_j and |C_j| is the number of objects in it.

Step 4. Iterate Steps 2 and 3 until convergence.
The objective function is the sum of squared errors, J = Σ_j Σ_{x∈C_j} ||x − c_j||².

The K-means-CP algorithm and standard K-means have the same efficiency and convergence properties [31]. The difference lies in the assignment step: standard K-means assigns cluster membership one object at a time, whereas K-means-CP assigns one closed-neighbor-set at a time in order to enforce strict transitivity.
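A minimal sketch of the K-means-CP assignment/update loop above, assuming the closed-neighbor-sets are supplied as precomputed groups of sample indices (building them from the k-NN graph is omitted for brevity):

```python
import numpy as np

def kmeans_cp(X, groups, centers, max_iter=100):
    """K-means-CP: membership is assigned one closed-neighbor-set at a time."""
    centers = np.asarray(centers, dtype=float).copy()
    labels = np.empty(len(X), dtype=int)
    for _ in range(max_iter):
        # Step 2: assign each closed-neighbor-set to the cluster that is
        # closest in the average sense over the whole set
        for g in groups:
            d = np.linalg.norm(X[g][:, None, :] - centers[None, :, :], axis=2)
            labels[g] = int(d.mean(axis=0).argmin())
        # Step 3: move each center to the centroid of its cluster
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(len(centers))])
        # Step 4: iterate until the centers stop moving
        converged = np.allclose(new_centers, centers)
        centers = new_centers
        if converged:
            break
    return centers, labels
```

Because every member of a closed-neighbor-set receives the same label, k-NN consistency holds by construction.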

3.3. Immune Selection

The immune selection corresponds to the immune regulation mechanism in the biological immune system. It is accomplished in two steps: immune detection and antibody selection. During the training of soAbNet, immune detection first calculates the affinities between all pairs of memory antibodies. Antibody selection then combines, through antibody combination, any two antibodies whose affinity distance is less than a preset affinity threshold.

The value of the affinity threshold has an important effect on the number and the distribution of memory antibodies. Generally, the threshold can initially be set to a small value and then gradually increased, with its final value chosen after analyzing the corresponding results.

Immune selection robustly enhances the positive effect of vaccination while eliminating its negative influence.
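Immune selection can be sketched as a pairwise suppression pass over the memory antibodies; the concentration-weighted merge below is an illustrative stand-in for the antibody combination rule, with antibodies as (attributes, class, concentration) tuples.

```python
import numpy as np

def immune_selection(antibodies, sigma):
    """Repeatedly merge same-class antibody pairs closer than the threshold sigma."""
    abs_ = list(antibodies)
    merged = True
    while merged:
        merged = False
        for i in range(len(abs_)):
            for j in range(i + 1, len(abs_)):
                ai, ci, di = abs_[i]
                aj, cj, dj = abs_[j]
                if ci == cj and np.linalg.norm(ai - aj) < sigma:
                    # antibody combination: concentration-weighted merge
                    combo = ((di * ai + dj * aj) / (di + dj), ci, di + dj)
                    abs_ = [abs_[k] for k in range(len(abs_)) if k not in (i, j)]
                    abs_.append(combo)
                    merged = True
                    break
            if merged:
                break
    return abs_
```

Raising sigma merges more antibodies and yields a smaller, more general memory set, which is the trade-off the threshold controls.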

3.4. Procedure of Hybrid Immune Algorithm

The process of the proposed hybrid algorithm is illustrated in Figure 2.

Figure 2: Flowchart of the proposed hybrid immune algorithm.

The proposed hybrid immune algorithm is trained according to the following steps.

(1) Data Preprocessing. In order to improve the generalization performance of the proposed immune algorithm, the given dataset is normalized. Then a training dataset is selected at random, and the rest is taken as the testing dataset.

(2) Vaccination. As the first phase of the immune operator, vaccination obtains vaccines from the training dataset using the K-means-CP algorithm. The obtained vaccines are taken as the initial antibodies of soAbNet and encoded according to the antibody coding.

(3) Training soAbNet. Train the soAbNet with the initial antibodies on the training dataset to obtain the antibody set.

(4) Immune Selection. The antibody set in soAbNet becomes the memory antibody set after immune detection and antibody selection.

(5) Classification. The testing dataset is encoded according to the antigen coding and classified with the memory antibodies to validate the trained classifier.

4. Experiment Results and Discussion

4.1. Experiment on Four-Class Dataset

In order to validate the antibody optimization of the hybrid algorithm, the four-class dataset was chosen from the UCI machine learning repository to test the algorithm against soAbNet.

The four-class dataset is a typical classification dataset provided by Ho and Kleinberg [32]. It was obtained by preprocessing four kinds of two-dimensional samples into two classes and consists of 862 instances in total, each with two feature attributes.

In this section, 684 instances were randomly selected as the training dataset and the remaining 172 instances as the testing dataset. The total vaccine number was set to 10, with 5 per class. The instances of each class in the training dataset were clustered by the K-means-CP algorithm, and the obtained 10 cluster centers were taken as initial antibodies. The affinity threshold was set to 0.25. For comparison, the number of initial antibodies of each class in soAbNet was also set to 5. After training, both immune algorithms achieved 100% classification accuracy on the test dataset. The memory antibody distributions are illustrated in Figures 3 and 4, respectively.

Figure 3: Distribution of memory antibodies in soAbNet.
Figure 4: Distribution of memory antibodies in the proposed method.

Comparison of Figures 3 and 4 shows that the developed algorithm produces fewer memory antibodies than soAbNet for the same number of initial antibodies, while achieving the same recognition rate. The memory antibody distribution of the proposed algorithm is also more reasonable: fewer antibodies occupy the simple regions of the classification space, and more antibodies concentrate in the overlap region between the two classes.

4.2. Experiments on Three Other Data Sets

To validate the classification capacity of the hybrid immune algorithm, experiments were performed on three other benchmark datasets from the UCI repository [33]. These real-world datasets are described in Table 1.

Table 1: The datasets used.

In this section, experimental comparisons are made among the proposed hybrid immune algorithm, soAbNet, and back propagation neural network (BPNN) with 10-fold cross-validation on the three datasets. For comparison purposes, the number of initial antibodies in soAbNet and the number of vaccines in the proposed algorithm were both set to 8, 14, and 12 for the iris, wine, and breast tissue datasets, respectively. The affinity threshold was set to 0.16, 0.2, and 0.11, and the number of hidden-layer nodes in the BPNN was set to 10, 15, and 8 for the three datasets, respectively. Experimental results, including the average training accuracy and the average testing accuracy, are given in Table 2.

Table 2: Training and testing accuracy comparison (%).

From the results shown in Table 2, we can conclude that the proposed algorithm demonstrates excellent classification performance: it attains good training accuracy while maintaining good generalization ability.

4.3. Transformer Fault Diagnosis

Five dissolved gases in oil-immersed transformers, hydrogen (H2), methane (CH4), ethane (C2H6), ethylene (C2H4), and acetylene (C2H2), are the byproducts of internal faults [3]. The technique of dissolved gas analysis (DGA) is effective in detecting incipient faults in power transformers. To accelerate the convergence of the artificial immune algorithm, the normalized concentrations {H2/T, CH4/T, C2H6/T, C2H4/T, C2H2/T} are taken as the characteristic vector, where T is the total concentration of the five gases.
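The characteristic vector can be computed directly from the five gas concentrations; a small illustrative helper (gas values assumed in ppm):

```python
def dga_features(h2, ch4, c2h6, c2h4, c2h2):
    """Normalize the five dissolved-gas concentrations by their total T."""
    total = h2 + ch4 + c2h6 + c2h4 + c2h2
    return [g / total for g in (h2, ch4, c2h6, c2h4, c2h2)]
```

The resulting vector always sums to 1, which keeps all samples on a common scale regardless of absolute gas levels.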

The two main causes of oil deterioration in operating transformers are thermal and electrical failures. The fault types detectable by dissolved gas analysis (DGA) in IEC Publication 60599 are discharges of low energy (D1), discharges of high energy (D2), partial discharges (PD), thermal faults below 300°C (T1), thermal faults between 300°C and 700°C (T2), and thermal faults above 700°C (T3).

In this study, 350 DGA samples from faulty transformers were chosen as experimental data. These samples were divided into two parts: 213 samples were taken as the training dataset and the remaining 137 samples as the test dataset.

Analysis of the transformer fault samples shows that the key gases of thermal faults are CH4 and C2H4, whereas, among the electrical faults, the key gases of low-energy and high-energy discharges are C2H2 and H2, and H2 is the principal gas of partial discharges. Thermal faults and electrical faults are therefore relatively easy to distinguish. Among thermal faults, however, the subtype is closely related to the temperature at the fault location, and the boundaries between temperature ranges are too obscure to distinguish the subtypes easily; the initial antibody number should be increased appropriately when training the immune network.

According to the above analysis, each class of training samples was clustered using the K-means-CP algorithm, and the obtained cluster centers were used as initial antibodies for the training of soAbNet. The training sample distribution and the memory antibodies after training are shown in Table 3; the affinity threshold was set to 0.1.

Table 3: Sample distribution and memory antibodies.

The affinities between each test sample and the memory antibodies are calculated, and, according to the nearest neighbor rule, the memory antibody with the highest affinity identifies the type of the antigen. Using the proposed transformer fault diagnosis method, the total diagnostic accuracy on the 137 test samples is 84.67%. Detailed diagnosis results are shown in Table 4.
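The nearest-neighbor identification step can be sketched as follows, with memory antibodies represented as (attributes, class, concentration) tuples as elsewhere in this sketch:

```python
import numpy as np

def diagnose(sample, memory_antibodies):
    """Return the class of the memory antibody with the highest affinity
    (smallest Euclidean distance) to the test sample."""
    dists = [np.linalg.norm(sample - attrs) for attrs, _, _ in memory_antibodies]
    return memory_antibodies[int(np.argmin(dists))][1]
```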

Table 4: Diagnosis results of test samples.

The diagnostic accuracy on the testing fault dataset using the proposed hybrid immune algorithm was compared with that of soAbNet, back propagation neural network (BPNN), and the IEC method. The diagnostic accuracy for each fault type is shown in Table 5: the proposed immune algorithm achieves higher diagnostic accuracy than soAbNet, BPNN, and the IEC method.

Table 5: Diagnosis accuracy comparison among different methods (%).

5. Conclusions

This paper has presented a hybrid immune algorithm, a hybridization of soAbNet and an immune operator, to improve the classification accuracy of transformer fault diagnosis. The immune operator obtains vaccines from the training dataset to provide initial antibodies for soAbNet; through immune selection, it also accelerates convergence and enhances the quality of the memory antibodies in soAbNet.

The experimental results on the four-class dataset show that the proposed algorithm can enhance the quality of memory antibodies. The results on the other three datasets demonstrate its excellent classification performance, and the results on the real-world transformer fault dataset show very promising fault classification ability for power transformers. Furthermore, comparison with three other transformer diagnosis approaches, namely, soAbNet, the IEC method, and BPNN, highlights the superiority of the proposed approach in testing diagnostic accuracy.

It is worth pointing out that the affinity threshold in immune selection has an important effect on the performance of the proposed algorithm. In this paper, its value is selected by analyzing experimental results on the training dataset. To enhance the solution, optimization algorithms such as particle swarm optimization (PSO) and the genetic algorithm (GA) can be used for parameter optimization.

Finally, hybridization of two or more intelligent methods can improve the diagnosis reliability for incipient faults in oil-immersed transformers, which is helpful for practical engineering applications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This paper was supported by the Fundamental Research Funds for the Central Universities (no. 13MS69) and the Doctoral Scientific Research Foundation of Northeast Dianli University (no. BSJXM-201401), China.


References

1. W. H. Tang and Q. H. Wu, Condition Monitoring and Assessment of Power Transformers Using Computational Intelligence, Springer, London, UK, 2011.
2. Mineral Oil-Impregnated Electrical Equipment in Service—Guide to the Interpretation of Dissolved and Free Gases Analysis, IEC 60599, 2007.
3. "Guide for the interpretation of gases generated in oil-immersed transformers," Tech. Rep. IEEE Std C57.104-2008, 2009.
4. C. E. Lin, J. M. Ling, and C. L. Huang, "An expert system for transformer fault diagnosis using dissolved gas analysis," IEEE Transactions on Power Delivery, vol. 8, no. 1, pp. 231–238, 1993.
5. K. Tomsovic, M. Tapper, and T. Ingvarsson, "A fuzzy information approach to integrating different transformer diagnostic methods," IEEE Transactions on Power Delivery, vol. 8, no. 3, pp. 1638–1645, 1993.
6. Y. Zhang, X. Ding, Y. Liu, and P. J. Griffin, "An artificial neural network approach to transformer fault diagnosis," IEEE Transactions on Power Delivery, vol. 11, no. 4, pp. 1836–1841, 1996.
7. W. S. Lin, C. P. Hung, and M. H. Wang, "CMAC-based fault diagnosis of power transformers," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '02), pp. 986–991, May 2002.
8. Y.-C. Huang, "A new data mining approach to dissolved gas analysis of oil-insulated power apparatus," IEEE Transactions on Power Delivery, vol. 18, no. 4, pp. 1257–1261, 2003.
9. W. Chen, C. Pan, Y. Yun, and Y. Liu, "Wavelet networks in power transformers diagnosis using dissolved gas analysis," IEEE Transactions on Power Delivery, vol. 24, no. 1, pp. 187–194, 2009.
10. H. B. Zheng, R. J. Liao, S. Grzybowski, and L. J. Yang, "Fault diagnosis of power transformers using multi-class least square support vector machines classifiers with particle swarm optimisation," IET Electric Power Applications, vol. 5, no. 9, pp. 691–696, 2011.
11. X. Hao and S. Cai-Xin, "Artificial immune network classification algorithm for fault diagnosis of power transformer," IEEE Transactions on Power Delivery, vol. 22, no. 2, pp. 930–935, 2007.
12. N. Liu, W. Gao, and K. Tan, "Fault diagnosis of power transformer using a combinatorial neural network," Transactions of China Electrotechnical Society, vol. 18, no. 2, pp. 83–86, 2003.
13. L. Wu, Y. Zhu, and J. Yuan, "Novel method for transformer faults integrated diagnosis based on Bayesian network classifier," Transactions of China Electrotechnical Society, vol. 20, no. 4, pp. 45–51, 2005.
14. J. Zhang, H. Zhou, and C. Xiang, "Application of super SAB ANN model for transformer fault diagnosis," Transactions of China Electrotechnical Society, vol. 19, no. 7, pp. 49–58, 2004.
15. Y.-Q. Wang, F.-C. Lu, and H.-M. Li, "Synthetic fault diagnosis method of power transformer based on rough set theory and Bayesian network," Proceedings of the Chinese Society of Electrical Engineering, vol. 26, no. 8, pp. 137–141, 2006.
16. D. B. Zhang, Y. Xu, and Y. N. Wang, "Neural network ensemble method and its application in DGA fault diagnosis of power transformer on the basis of active diverse learning," Proceedings of the Chinese Society of Electrical Engineering, vol. 30, no. 22, pp. 64–70, 2010.
17. M. Dong, D. K. Xu, M. H. Li, and Z. Yan, "Fault diagnosis model for power transformer based on statistical learning theory and dissolved gas analysis," in Proceedings of the Conference Record of the IEEE International Symposium on Electrical Insulation, pp. 85–88, September 2004.
18. D. R. Morais and J. G. Rolim, "A hybrid tool for detection of incipient faults in transformers based on the dissolved gas analysis of insulating oil," IEEE Transactions on Power Delivery, vol. 21, no. 2, pp. 673–680, 2006.
19. A. Shintemirov, W. Tang, and Q. H. Wu, "Power transformer fault classification based on dissolved gas analysis by implementing bootstrap and genetic programming," IEEE Transactions on Systems, Man and Cybernetics, vol. 39, no. 1, pp. 69–79, 2009.
20. R. Naresh, V. Sharma, and M. Vashisth, "An integrated neural fuzzy approach for fault diagnosis of transformers," IEEE Transactions on Power Delivery, vol. 23, no. 4, pp. 2017–2024, 2008.
21. H. Du and S. Wang, "Data enriching based on art-artificial immune network," Pattern Recognition and Artificial Intelligence, vol. 14, no. 4, pp. 401–405, 2001.
22. L. N. de Castro and J. Timmis, Artificial Immune Systems: A New Computational Intelligence Approach, Springer, London, UK, 2002.
23. J. Timmis, M. Neal, and J. Hunt, "An artificial immune system for data analysis," BioSystems, vol. 55, no. 1–3, pp. 143–150, 2000.
24. A. Watkins, J. Timmis, and L. Boggess, "Artificial immune recognition system (AIRS): an immune-inspired supervised learning algorithm," Genetic Programming and Evolvable Machines, vol. 5, no. 3, pp. 291–317, 2004.
25. L. N. de Castro and J. Timmis, "An artificial immune network for multimodal function optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '02), pp. 699–704, May 2002.
26. Z. Li, J. Yuan, and L. Zhang, "Fault diagnosis for power transformer based on the self-organization antibody net," Transactions of China Electrotechnical Society, vol. 25, no. 10, pp. 200–206, 2010.
27. W. Lei and J. Licheng, "Immune evolutionary algorithm," in Proceedings of the 3rd International Conference on Knowledge-Based Intelligent Information Engineering Systems (KES '99), pp. 99–102, September 1999.
28. T. Calinski and J. Harabasz, "A dendrite method for cluster analysis," Communications in Statistics, vol. 3, pp. 1–27, 1974.
29. R. Tibshirani, G. Walther, and T. Hastie, "Estimating the number of clusters in a data set via the gap statistic," Journal of the Royal Statistical Society B: Statistical Methodology, vol. 63, no. 2, pp. 411–423, 2001.
30. S. Dudoit and J. Fridlyand, "A prediction-based resampling method for estimating the number of clusters in a dataset," Genome Biology, vol. 3, no. 7, pp. 1–21, 2002.
31. C. Ding and X. He, "K-nearest-neighbor consistency in data clustering: incorporating local information into global optimization," in Proceedings of the ACM Symposium on Applied Computing, pp. 584–589, Nicosia, Cyprus, March 2004.
32. T. K. Ho and E. M. Kleinberg, "Building projectable classifiers of arbitrary complexity," in Proceedings of the 13th International Conference on Pattern Recognition (ICPR '96), pp. 880–885, August 1996.
33. K. Bache and M. Lichman, UCI Machine Learning Repository, University of California, School of Information and Computer Science, Irvine, Calif, USA, 2013.