Advances in Artificial Neural Systems


Research Article | Open Access


Garima Singh, Laxmi Srivastava, "Genetic Algorithm-Based Artificial Neural Network for Voltage Stability Assessment", Advances in Artificial Neural Systems, vol. 2011, Article ID 532785, 9 pages, 2011.

Genetic Algorithm-Based Artificial Neural Network for Voltage Stability Assessment

Academic Editor: Ping Feng Pai
Received: 03 May 2011
Accepted: 16 Jun 2011
Published: 31 Jul 2011


With the emerging trend of restructuring in the electric power industry, many transmission lines worldwide have been forced to operate at almost full capacity. As a result, more incidents of voltage instability and collapse are being observed, leading to major system breakdowns. To avoid such incidents, a fast and accurate estimation of the voltage stability margin is required. In this paper, a genetic algorithm-based back propagation neural network (GABPNN) is proposed for estimating the voltage stability margin, which indicates a power system's proximity to voltage collapse. The proposed approach uses a hybrid algorithm that integrates a genetic algorithm (GA) with the back propagation (BP) neural network, aiming to combine the capacity of GAs to avoid local minima with the fast execution of the BP algorithm. Input features for the GABPNN are selected using an angular distance-based clustering technique. The performance of the proposed GABPNN approach has been compared with the commonly used gradient-based BP neural network by estimating the voltage stability margin at different loading conditions in a 6-bus and the IEEE 30-bus system. The GA-based neural network learns faster and, at the same time, provides more accurate voltage stability margin estimates than the BP-based network. It is found to be suitable for online applications in energy management systems.

1. Introduction

Voltage stability is concerned with the ability of a power system to maintain acceptable voltages at all system buses under normal conditions as well as after being subjected to a disturbance. The analysis of voltage stability thus deals with finding the voltage levels at all buses in the system under different loading conditions to ascertain the stability limit and margin. A power system enters a state of voltage instability when a disturbance, an increase in load demand, or a change in system conditions causes a progressive and uncontrollable decline of bus voltages.

The main factor causing voltage instability is the inability of the power system to meet the reactive power demand. In most cases, voltage profiles show no abnormality prior to voltage collapse caused by load variation. The voltage stability margin (VSM) is a static voltage stability index used to quantify how "close" a particular operating point is to the point of voltage collapse [1]. Thus, VSM may be used to estimate the steady-state voltage stability limit of a power system. Knowledge of the voltage stability margin is of vital importance to utilities in order to operate their systems with appropriate security and reliability.

During the last few years, several methodologies for detecting voltage collapse points (saddle-node bifurcations) in power systems using steady-state analysis techniques have been modified and applied to the analysis of power system voltage stability, for example, PV and QV curves, sensitivity-based indices [2], and continuation power flow methods [3, 4]. Other methods, such as bifurcation theory [5], energy functions [6], and singular value decomposition [7], have also been reported in the literature.

These analytical methods involve considerable computational effort and require significantly large computational time and, hence, cannot be used directly for online monitoring and initiation of preventive control actions to enhance system voltage stability. For online applications, there is a need for quick detection of the potentially dangerous situations of voltage instability so that necessary actions may be taken to avoid the occurrence of voltage collapse in a power system.

Recently, artificial neural networks (ANNs) have been proposed for voltage stability evaluation [1, 8–16] as they have the ability to properly classify a highly nonlinear relationship and, once trained, they can classify new data much faster than would be possible by solving the model analytically. However, most of the published work in the area of voltage stability has employed multilayer perceptron networks trained by the back propagation algorithm [8–14]. In [13], the energy function-based voltage stability indicator is predicted using a multilayer perceptron (MLP) with a second-order learning rule and a radial basis function (RBF) network. The input to the neural network consists of the real and reactive power injections at all load buses in the system for a particular loading condition, while the output of the network is the energy margin. In [14], an approach based on an artificial feed-forward neural network (FFNN) is presented for assessing power system voltage stability. The approach uses real and reactive power, as well as voltage vectors for generators and load buses, to train the neural net (NN). The input properties of the NN are generated from offline training data with various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. In [1], a parallel self-organizing hierarchical neural network is developed for assessing power system voltage stability, while in [15, 16], Kohonen's self-organizing feature map (SOFM) has been proposed for quantifying stability margins.

In typical power systems, there is a voluminous amount of input data. The success of ANN applications therefore also depends on a systematic approach to selecting highly important features, which results in a compact and efficient ANN. Different feature reduction methods for voltage stability assessment are compared in [17]; each has its own merits and demerits, but feature reduction is crucial for the success of an ANN application. Feature selection based on a clustering technique can identify important parameters that are directly measurable from the power system.

Voltage instability is, in general, caused by either of two types of system disturbances: increase in load demand and contingencies. In the present paper, voltage instability due to increase in load demand is considered. A genetic algorithm-based back propagation neural network [18–23] has been proposed for voltage stability margin estimation, which evaluates system stability from the static viewpoint. Input features for the GABPNN are selected by applying an angular distance-based clustering technique to the real and reactive loads [24, 25].

These conventional methods of voltage stability assessment are computationally intensive and data sensitive. An artificial neural network-based approach, on the other hand, is fast and provides results even with partially missing or noisy data. Back propagation (BP) searches the error surface by means of the gradient descent technique in order to minimize the error, and is therefore likely to get stuck in a local minimum [18]. In the GABPNN, on the other hand, the search is guided by a genetic algorithm (GA), an adaptive search and optimization algorithm that mimics the principles of natural genetics. Genetic algorithms are quite different from the traditional search and optimization techniques used in engineering design problems, but at the same time exhibit simplicity, ease of operation, minimal requirements, and a global perspective.

2. Genetic Algorithm-Based BP Neural Network

The idea of hybridizing the two approaches, namely, GA and BPN, follows naturally. Rajasekaran and Pai [20] used GAs to guide a back propagation network in finding the necessary connections, instead of using full connections, in order to enhance the speed of training. The genetic algorithm (GA) is considered a stochastic heuristic (or metaheuristic) method, inspired by the adaptive and evolutionary mechanisms of living organisms.

A genetic algorithm is an adaptive search technique for solving mathematical and engineering optimization problems that emulates Darwin's evolutionary principle that the fittest are most likely to survive. The genetic algorithm attempts to find a good (or the best) solution to the problem by genetically breeding a population of individuals over a series of generations. Each individual in the population represents a candidate solution to the given problem. The GA transforms a population (set) of individuals, each with an associated fitness value, into a new generation of the population using reproduction, crossover, and mutation. The core of the GA is the genetic recombination of strings. Generally, a population of strings is randomly generated at the beginning of the process.

An important characteristic of a GA is that the global nature of the search is related to the diversity of the initial population: the more diverse the population, the more global the search. From the initial population, a fitness-proportionate selection strategy is adopted to select individuals from the current population. Higher selective pressure often leads to loss of diversity in the population, which causes premature convergence but at the same time improves convergence speed. A balance is therefore required between population diversity and convergence speed to obtain good GA performance. Reproduction, crossover, and mutation operators are then applied randomly to produce the next-generation population, until the genetic stopping condition is satisfied.

When the GA is correctly implemented for solving a problem, the population evolves over successive iterations with the fitness value increasing towards the global optimum. Several features of GAs, such as no dependency on gradient information, a lower likelihood of being trapped in local minima, the ability to handle problems where no explicit or exact objective function is available, and the ability to handle concave objective functions, make them much more robust than many other search algorithms. Moreover, GAs are much superior to conventional search and optimization techniques in high-dimensional problem spaces due to their inherent parallelism and the directed stochastic search implemented by recombination operators. The GA operates [18] through a simple cycle of three stages, which are described in Table 1.

(A) Randomly generate an initial population of individuals.
(B) Carry out the following substeps iteratively for each generation until a termination condition is fulfilled.
(i) Evaluate the fitness of each individual to check its ability to solve the specific problem, and save the best individual of all preceding populations.
(ii) Select pairs of individuals to be parents for reproduction on the basis of their fitness.
(iii) Generate offspring from the parents by applying genetic search operators such as crossover and mutation, and add them to the population.

As shown in Table 1, in substep (ii), selection is based on fitness: the fitter an individual, the greater its chance of being selected for reproduction and contributing offspring to the next generation. The crossover operator takes two chromosomes and swaps part of their genetic information to produce new chromosomes. Mutation is implemented by occasionally altering a random bit in a string before the offspring are inserted into the new population. The performance of a GA depends on a balanced combination of three control parameters: crossover probability, mutation probability, and population size. Some important observations related to GAs are the following:
(i) Increasing the crossover probability increases the recombination of building blocks, but it also increases the disruption of good strings.
(ii) Increasing the mutation probability tends to transform the genetic search into a random search, but it also helps reintroduce lost genetic material.
(iii) Increasing the population size increases diversity and reduces the probability of premature convergence to a local optimum, but it also increases the time required for the population to converge to the optimal region of the search space.
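The three-stage cycle of Table 1 can be sketched as a minimal real-coded GA. This is an illustrative sketch only: the function names, parameter values, and the Gaussian mutation are assumptions, not the settings used in the paper, and the fitness function must return non-negative values for the fitness-proportionate selection to work.

```python
import random

def run_ga(fitness, pop_size=20, n_genes=8, generations=50,
           p_cross=0.8, p_mut=0.05):
    """Minimal real-coded GA following the cycle of Table 1.

    `fitness` maps a list of genes to a non-negative score; all
    parameter values here are illustrative.
    """
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # (i) evaluate fitness and keep the best individual seen so far
        best = max(pop + [best], key=fitness)
        # (ii) fitness-proportionate selection of parents
        weights = [fitness(ind) for ind in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        # (iii) crossover and mutation produce the next generation
        next_pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < p_cross:
                cut = random.randrange(1, n_genes)  # single-point crossover
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            next_pop += [a, b]
        pop = [[g + random.gauss(0, 0.1) if random.random() < p_mut else g
                for g in ind] for ind in next_pop]
    return best
```

With a fitness such as `1 / (1 + sum(g * g for g in ind))`, the returned individual drifts towards the all-zero string over the generations.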

2.1. Weight Optimization Using GA for ANN Training

Artificial neural networks and genetic algorithms are both abstractions of natural processes. They are formulated into a computational model so that the learning power of neural networks and the adaptive capabilities of evolutionary processes can be combined [18]. Genetic algorithms can help determine optimized neural network interconnection weights, as well as provide a faster mechanism for training the neural network. Training a given neural network generally means determining an optimal set of connection weights. This is formulated as the minimization of a network error function over the training data set by iteratively adjusting the weights. The mean square error between the target and actual outputs, averaged over all output nodes, serves as a good estimate of the fitness of the network configuration for the current input. Conventionally, a back propagation neural network (BPNN) updates its weights through a gradient descent technique with backward error propagation. This gradient search technique sometimes gets stuck in local minima. GAs, on the other hand, though not guaranteed to find the global optimum solution, have been found to be good at finding "acceptably good" solutions "acceptably quickly" [18].

The GA-based weight optimization during training of an ANN follows two steps. The first step is encoding strings for the representation of connection weights. The second step is the evolutionary process simulated by GA, in which search operators have to be implemented in conjunction with the representation scheme. The evolution stops when the population has converged. A population is said to have converged when 95% of the individuals constituting the population share the same fitness value [20]. The whole process for neural network training using a genetic algorithm is shown in Table 2.

(i) Decode each individual in the current population into a set of connection weights and construct the corresponding ANN.
(ii) Evaluate the ANN by computing its total mean square error between actual and target outputs.
(iii) Determine the fitness of each individual as the inverse of its error: the higher the error, the lower the fitness.
(iv) Store the weights for mating pool formation.
(v) Apply search operators such as crossover and mutation to the parents to generate offspring.
(vi) Calculate the fitness of the new population.
(vii) Repeat steps (iii) to (vi) until the solution converges.
(viii) Extract the optimized weights.
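Steps (i) to (iii) of Table 2 can be sketched as follows. The single-hidden-layer structure, tanh activations, and the `(W1, W2)` weight pair are illustrative assumptions made here, not details given in the text:

```python
import numpy as np

def evaluate_chromosome(weights, X, targets):
    """Steps (i)-(iii) of Table 2 for one individual: build the network
    from the decoded weights, compute the mean square error between
    actual and target outputs, and return fitness as its inverse.
    """
    W1, W2 = weights                  # decoded input-hidden, hidden-output weights
    hidden = np.tanh(X @ W1)          # hidden-layer activations
    out = np.tanh(hidden @ W2)        # network outputs
    mse = np.mean((out - targets) ** 2)
    return 1.0 / mse                  # higher error -> lower fitness
```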

Obstacles to the success of a GA in evolving the weights of a fixed network structure include the manner in which weights are encoded in the chromosomes and the definition of the fitness function, which must allow the preferential reproduction of good offspring and prevent premature convergence to a poor solution [18, 20]. Although the GA offers an attractive way to optimize ANN weights, it is relatively slow at local fine tuning in comparison to gradient methods. A desirable approach in this case is to integrate a local gradient search with the GA.

2.2. Weight Extraction

To determine the fitness value of each chromosome, the weights are extracted from it.

Let C denote a chromosome and x_k its kth gene. Each weight is encoded as a fixed-length string of digits (Section 3 uses five digits per weight), and the actual weight is obtained by decoding this digit string back into a real value in the allowed weight range.
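The paper follows the encoding of Rajasekaran and Pai [20], whose exact decoding formula was lost from this text; the linear mapping below, from a d-digit gene onto the weight range [−5, 5] used in Section 3, is therefore an illustrative stand-in rather than the paper's formula:

```python
def decode_weight(gene_digits, w_min=-5.0, w_max=5.0):
    """Decode one d-digit gene (a list of digits 0-9) into a real weight.

    Illustrative linear decoding: the d-digit integer is mapped
    proportionally onto [w_min, w_max].
    """
    d = len(gene_digits)
    value = int("".join(str(g) for g in gene_digits))  # e.g. [1,2,3,4,5] -> 12345
    return w_min + (w_max - w_min) * value / (10 ** d - 1)
```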

2.3. Feature Selection

In any neural network application, if a large number of input variables is used, the number of interconnection weights increases and the training of the neural network becomes extremely slow. To overcome this problem, only those variables that have a significant effect on the output, that is, on the voltage stability margin, are selected as inputs. The performance of a neural network depends mainly on the input features selected for its training, so it is essential to reduce the number of inputs to the neural network and to select an optimal set.

To select the input features, an angular distance-based clustering method is used. The basic principle of clustering is to group the system variables into clusters such that the variables within a cluster have similar characteristics, and then pick one representative variable from each cluster as the feature for that cluster. This reduces the number of system variables from the full set to the number of clusters. For feature selection using the clustering technique, a set of training patterns is first created.

The state vectors of all training patterns are collected into a matrix, with one row per operating point and one column per system variable.

The ith row of this matrix contains the values of all system variables at operating point i, while the jth column consists of the values of variable j over all training patterns. Define the column vector Vj for each variable j. The variables can then be clustered based on these vectors Vj.

System variables with similar vectors Vj are grouped into the same cluster. As shown in Figure 1, two vectors Vi and Vj that are similar to each other have a small angle between them, so a group of similar vectors can be placed in one cluster. To do this, define the cosine of the angle between two vectors Vi and Vj as

cos θij = (Vi · Vj) / (|Vi| |Vj|).

This cosine value can be used to evaluate the degree of similarity between two vectors. If cos θij is greater than a specified threshold ρ, the two vectors Vi and Vj are regarded as similar and are put in the same cluster. Details of the clustering algorithm are as follows.

Step 1. Let system variable X1 belong to cluster 1 and let the cluster vector C1 of cluster 1 be equal to the column vector V1. Set the initial counts j = 1 and G = 1 (G is the current number of clusters).

Step 2. Increase j by one.

Step 3. Compute the cosine value between vector Vj and each existing cluster vector Cg, and let cos θjg* be the largest of these values, attained for cluster g*.
If cos θjg* < ρ (the specified threshold), the present vector Vj is far from all existing cluster vectors, and a new cluster must be created for system variable Xj; go to Step 5.

Step 4. If cos θjg* ≥ ρ, the present vector Vj is close to the cluster vector Cg* of cluster g*, and system variable Xj should be assigned to cluster g*.

If vector Vj has not been presented before, group system variable Xj in cluster g* and update the cluster vector as Cg* = (ng* Cg* + Vj) / (ng* + 1), where ng* is the number of system variables in cluster g*. Go to Step 6. If vector Vj has been presented before and variable Xj is already in cluster g*, go to Step 6.

If vector Vj has been presented before and variable Xj was grouped in a different cluster h, move system variable Xj to cluster g* and execute the corresponding update formulae for the cluster vectors Cg* and Ch, where nh is the number of system variables in cluster h. This counts as a move of cluster elements. Go to Step 6.

Step 5. Create a new cluster G + 1 for system variable Xj with the cluster vector C(G+1) = Vj.
Increase G by one and go to Step 6.

Step 6. If all variables have been processed, proceed to Step 7; otherwise, go to Step 2.

Step 7. If there was any move of cluster elements in the preceding iteration, reset j to zero and go to Step 2. Otherwise, go to Step 8.

Step 8. For each cluster g, find the system variable whose column vector is closest to the cluster vector Cg, that is, whose cosine with Cg is the largest among all variables in cluster g. Let that system variable be the feature for cluster g. Store the system variables and the feature variable of each cluster, and stop.
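The clustering steps above can be sketched as a single assignment pass. This is a simplified sketch: the paper additionally iterates until no variable moves between clusters (Step 7), which is omitted here for brevity, and the function and variable names are assumptions:

```python
import numpy as np

def angular_clustering(V, threshold=0.936):
    """Group the column vectors of V (one per system variable) by cosine
    similarity, and pick the variable closest to each cluster vector as
    that cluster's feature. One-pass sketch of the algorithm above.
    """
    n_vars = V.shape[1]
    clusters = [[0]]               # indices of variables in each cluster
    centers = [V[:, 0].copy()]     # running cluster vectors Cg
    for j in range(1, n_vars):
        v = V[:, j]
        cos = [v @ c / (np.linalg.norm(v) * np.linalg.norm(c)) for c in centers]
        g = int(np.argmax(cos))
        if cos[g] >= threshold:    # close to cluster g: assign and update Cg
            n_g = len(clusters[g])
            centers[g] = (n_g * centers[g] + v) / (n_g + 1)
            clusters[g].append(j)
        else:                      # far from all clusters: create a new one
            clusters.append([j])
            centers.append(v.copy())
    # feature for each cluster: member with the largest cosine to Cg
    features = []
    for g, members in enumerate(clusters):
        best = max(members, key=lambda j: V[:, j] @ centers[g]
                   / (np.linalg.norm(V[:, j]) * np.linalg.norm(centers[g])))
        features.append(best)
    return clusters, features
```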

2.4. Data Normalization

During training of a neural network, input variables with larger magnitudes may tend to suppress the influence of smaller ones. To overcome this problem, the neural networks are trained with normalized input data, leaving the network to learn the weights associated with the connections emanating from these inputs. The raw data are scaled into the range 0.1–0.9 for use by the neural networks to minimize the effect of magnitude differences between inputs [18]. If the output variable assumes values close to unity or zero, training becomes difficult because these values are practically never realized by the activation (threshold) function. A way to overcome this difficulty is to normalize each variable x to a suitable range (say 0.1 to 0.9). In the present application, each input parameter x is normalized as xn before being applied to the neural network according to the following equation:

xn = 0.1 + 0.8 (x − xmin) / (xmax − xmin),

where xmax and xmin are the maximum and minimum values of the data parameter x. The input data are normalized between 0.1 and 0.9. Similarly, the output data (VSM) are normalized between 0.1 and 0.9. During the testing phase, the output of the neural network is denormalized to obtain the actual value of the voltage stability margin.
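Assuming the linear min-max form of the scaling described above, the normalization and its inverse (used to recover the actual VSM during testing) can be sketched as:

```python
def normalize(x, x_min, x_max):
    """Scale a raw value x into [0.1, 0.9]."""
    return 0.1 + 0.8 * (x - x_min) / (x_max - x_min)

def denormalize(x_n, x_min, x_max):
    """Invert the scaling to recover the actual value (e.g., the VSM)."""
    return x_min + (x_n - 0.1) * (x_max - x_min) / 0.8
```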

3. Solution Algorithm

Sequential steps (flowchart) for developing the GABPNN proposed for voltage stability margin estimation are illustrated in Figure 2. The algorithm for VSM estimation is summarized as follows.

Step 1. Generate a large number of load patterns by randomly perturbing the loads at all buses over a wide range.

Step 2. Normalize the input data, selected by the angular distance-based clustering method, and the output, that is, the voltage stability margin, to train the GABPNN.

Step 3. Set the number of generations for the genetic optimization of weights.

Step 4. Initialize the structure of the neural network, that is, the input-hidden-output nodes, to determine the number of weights.

Step 5. Set the generation count g = 0.

Step 6. Randomly generate the initial population of p real-coded chromosomes, where p is the population size.

Step 7. Extract the weights for each chromosome of the population.

Step 8. Calculate the fitness value of each individual in the population as the inverse of its root mean square error over the training patterns.

Step 9. Get the mating pool ready by replacing the worst-fit individuals with high-fit individuals.

Step 10. Using the crossover mechanism, reproduce offspring from the parent chromosomes.

Step 11. The next-generation population is obtained. Increase the generation count by 1, that is, g = g + 1.

Step 12. If g is less than the set number of generations, go to Step 7; otherwise, go to the next step.

Step 13. Training is complete. Extract the optimized weights from the converged population.

Step 14. Test the developed GABPNN on unseen load patterns.
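Step 8's fitness evaluation can be sketched for the 10-4-1 network reported in Section 4. The flat weight layout, sigmoid activations, and the absence of bias terms are assumptions made for illustration (the weight count in Section 3 suggests no separately encoded biases):

```python
import numpy as np

def fitness(chromosome_weights, X, t, hidden=4):
    """Fitness of one chromosome: inverse RMS error of the decoded network.

    `chromosome_weights` is the already-decoded flat weight vector,
    X the normalized input patterns, t the normalized target VSM values.
    """
    n_in = X.shape[1]
    W1 = chromosome_weights[: n_in * hidden].reshape(n_in, hidden)
    W2 = chromosome_weights[n_in * hidden:].reshape(hidden, 1)
    h = 1.0 / (1.0 + np.exp(-(X @ W1)))  # sigmoid hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output in (0, 1)
    rmse = np.sqrt(np.mean((y.ravel() - t) ** 2))
    return 1.0 / rmse                    # fitness = inverse RMS error
```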

In the proposed GABPNN, the GA performs the weight adaptation needed to minimize the error during training. Before executing a given task, a GA requires several parameters to be devised for its proper functioning, among them gene encoding, population initialization, selection, reproduction, and fitness evaluation. The basic computing elements in GAs are genes, which are joined together to form a string of values referred to as a chromosome. Gene encoding is required to represent the weights, and a proper encoding scheme is important for efficient weight optimization in an ANN.

There is no definite view in the literature about the suitability of encoding schemes for chromosomes. A too-complicated scheme provides high flexibility in problem representation but may reduce the GA's efficiency due to complicated operations; a too-simple representation may suffer from slow or premature convergence. An encoding scheme must therefore be selected carefully so that GA operations are not compromised while still providing enough flexibility to support dynamic weight adaptation. In the present work, real-value coding is adopted for gene encoding.

A population of individuals (chromosomes) is generated randomly to start the genetic search procedure. The chromosome length equals the number of weights to be optimized multiplied by the gene length, and the number of weights is determined by the neural network configuration, which is assumed to be fixed in this work. If the artificial neural network architecture is x-y-z, that is, x input neurons, y hidden neurons, and z output neurons, then the chromosome length is (x·y + y·z)·d, where d is the gene length. Choosing a population size p, an initial population of p chromosomes is randomly generated.

Thus, the chromosome length depends on the number of digits used to represent each weight, so the choice of digits per weight is of great importance. Too few digits may result in poor convergence, while a large number of digits per weight leads to slow convergence due to a very long chromosomal string. In the present work, each weight is encoded as a fixed 5-digit string (i.e., d = 5), representing weights in the range [−5, 5]. After deciding the encoding scheme and generating an initial population of chromosomes, the program for the GABPNN was implemented as explained above.
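For instance, under the (x·y + y·z)·d relation above, and assuming biases are not separately encoded, the chromosome length for the 10-4-1 network of Section 4 works out as follows:

```python
def chromosome_length(x, y, z, d=5):
    """Chromosome length for an x-y-z network with d digits per weight.

    Assumes only layer-to-layer weights (no bias terms), as the text implies.
    """
    n_weights = x * y + y * z
    return n_weights * d

# For the 10-4-1 GABPNN with 5-digit genes:
# 10*4 + 4*1 = 44 weights, giving 220 genes per chromosome.
```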

Convergence is the progress towards increasing uniformity of fitness values, as in each generation the lowest-fitness individuals are replaced by higher-fitness ones. The fitness function is taken as the inverse of the root mean square (RMS) error. From the converged population, that is, the group of individuals with minimum RMS error, the final optimized weights are extracted and decoded. These optimized weights define the trained GABPNN, which is then ready for testing on unseen load patterns.

4. System Studies

The GA-based back propagation neural network approach has been implemented for voltage stability margin estimation on a standard 6-bus system [18] and the IEEE 30-bus system [26]. Though the GABPNN-based approach performed well on both test systems, due to limited space only the results for the IEEE 30-bus system are given in this paper. The IEEE 30-bus system consists of 30 buses and 41 lines. The GABPNN is trained to estimate the voltage stability margin under changing load in normal system operation. For voltage stability margin estimation, 250 load patterns were generated by varying the loads at all buses randomly in the range of 50% to 160% of their base-case values, with the corresponding voltage stability margin values obtained from the continuation power flow method (UWPFLOW) [27]. As the voltage stability problem usually arises from the loading condition of a power system, the real and reactive power injections are considered as possible inputs for the GA-based neural network. Input features are selected by applying the angular distance-based clustering technique; with a threshold of 0.936, eight power injections (P2, P8, P9, P11, P12, Q8, Q11, Q12) were selected. In addition to these features, the total real and reactive loads of the power system were selected as input features, making the total number of input variables equal to 10.

Out of the 250 patterns, 200 were selected randomly for training and the remaining 50 for testing the performance of the trained GABPNN model. During training, it was found that the number of hidden nodes affects the convergence rate by increasing or decreasing the complexity of the neural network architecture. Hence, the hidden nodes were selected on a trial-and-error basis to obtain fitness convergence in the minimum number of generations with a high convergence rate. For the VSM estimation problem, the optimum size of the GABPNN was found to be 10-4-1. Optimal training of the GABPNN was achieved in only 15 iterations.

During the testing phase, the 50 testing patterns were used to evaluate the performance of the trained GABPNN. The testing results are shown in Table 3 and Figure 3. As can be observed from the table and figure, the trained GABPNN provided good results during the testing phase.

Table 3: Testing results of the trained GABPNN. Columns: TP (testing pattern), voltage stability margin, error (p.u.), and error (%).
4.1. Comparison of GABPNN and BPNN

As the BP neural network is the most popular ANN model and has been implemented in almost every area of engineering, the performance of the GABPNN model is compared here with a BPNN model. To compare the effectiveness of the proposed GABPNN approach, a BP model with the same structure, that is, 10-4-1, was trained for 2500 iterations. The testing results of the trained BPNN are given in Table 4. As can be observed from Table 4, the testing performance of the BP model was not satisfactory: it produced a maximum error of 15.71% with an rms error of 0.113 p.u. The developed GABPNN, on the other hand, gave a maximum error of 3.51% and an rms error of 0.021 p.u. However, the training time for both models was almost the same (approximately 13.5 seconds).

Table 4: Testing results of the trained BPNN (excerpt). TP: testing pattern.

TP | Actual VSM | VSM by BPNN | Error (p.u.) | Error (%)
25 | 1.55784 | 1.56032 | −0.0025 | 0.1590
26 | 1.82254 | 1.54385 | 0.27869 | 15.2913

5. Conclusion

In this paper, a hybrid intelligent approach, involving a genetic algorithm for artificial neural network development, has been proposed for voltage stability margin estimation in power systems. Fast and accurate estimation of the VSM is an effective way to assess the stability of a power system from the voltage viewpoint. The use of a GA makes it possible to achieve an effective input-output mapping in the artificial neural network with considerable speed-up in its training.

The proposed GABPNN approach combines the strengths of both evolutionary computing and artificial neural networks. The value of this hybrid approach is that the GA requires no gradient information and is therefore less susceptible than back propagation to local variations in the error surface. Another advantage is that it operates on a population of candidate solutions in parallel, instead of starting with a single candidate and iteratively operating on it with some heuristic. The proposed approach provides acceptably good generalization ability during testing and is found to be computationally efficient for VSM estimation. The successful application of the GABPNN establishes the suitability of the proposed ANN model for online assessment of voltage stability.


Acknowledgments

The authors sincerely acknowledge the financial support provided by the Department of Science and Technology (DST), New Delhi, India, under Research Grant no. F.No.SR/S3/EECE/0064/2009 dated 22-01-10, and the Director, Madhav Institute of Technology and Science, Gwalior, India, for support in carrying out this research work.


  1. L. Srivastava, S. N. Singh, and J. Sharma, “Estimation of loadability margin using parallel self-organizing hierarchical neural network,” Computers and Electrical Engineering, vol. 26, no. 2, pp. 151–167, 2000.
  2. P. Kundur, Power System Stability and Control, McGraw-Hill, New York, NY, USA, 1994.
  3. V. Ajjarapu and C. Christy, “The continuation power flow: a tool for steady state voltage stability analysis,” IEEE Transactions on Power Systems, vol. 7, no. 1, pp. 416–423, 1992.
  4. C. A. Canizares and F. L. Alvarado, “Point of collapse and continuation methods for large AC/DC systems,” IEEE Transactions on Power Systems, vol. 8, no. 1, pp. 1–8, 1993.
  5. C. A. Canizares, “On bifurcations, voltage collapse and load modeling,” IEEE Transactions on Power Systems, vol. 10, no. 1, pp. 512–522, 1995.
  6. T. J. Overbye and C. L. DeMarco, “Improved techniques for power system voltage stability assessment using energy methods,” IEEE Transactions on Power Systems, vol. 6, no. 4, pp. 1446–1452, 1991.
  7. P. A. Löf, T. Smed, G. Andersson, and D. J. Hill, “Fast calculation of a voltage stability index,” IEEE Transactions on Power Systems, vol. 7, no. 1, pp. 54–64, 1992.
  8. B. Jeyasurya, “Artificial neural networks for power system steady-state voltage instability evaluation,” Electric Power Systems Research, vol. 29, no. 2, pp. 85–90, 1994.
  9. D. Salatino, R. Sbrizzai, M. Trovato, and M. La Scala, “Online voltage stability assessment of load centers by using neural networks,” Electric Power Systems Research, vol. 32, no. 3, pp. 165–173, 1995.
  10. A. A. El-Keib and X. Ma, “Application of artificial neural networks in voltage stability assessment,” IEEE Transactions on Power Systems, vol. 10, no. 4, pp. 1890–1896, 1995.
  11. H. P. Schmidt, “Application of artificial neural networks to the dynamic analysis of the voltage stability problem,” IEE Proceedings: Generation, Transmission & Distribution, vol. 144, pp. 371–376, 1997.
  12. A. Mohamed and G. B. Jasmon, “Neural network approach to dynamic voltage stability prediction,” Electric Machines and Power Systems, vol. 25, no. 5, pp. 509–523, 1996.
  13. V. R. Dinavahi and S. C. Srivastava, “ANN based voltage stability margin prediction,” in Proceedings of the IEEE Power Engineering Society Summer Meeting, vol. 2, pp. 1275–1279, Vancouver, Canada, July 2001.
  14. S. Kamalasadan, D. Thukaram, and A. K. Srivastava, “A new intelligent algorithm for online voltage stability assessment and monitoring,” International Journal of Electrical Power & Energy Systems, vol. 31, no. 2-3, pp. 100–110, 2009.
  15. S. Chauhan and M. P. Dave, “Kohonen neural network classifier for voltage collapse margin estimation,” Electric Machines and Power Systems, vol. 25, no. 6, pp. 607–619, 1996.
  16. Y. H. Song, H. B. Wan, and A. T. Johns, “Kohonen neural network based approach to voltage weak buses/areas identification,” IEE Proceedings: Generation, Transmission & Distribution, vol. 144, pp. 340–344, 1997.
  17. W. Nakawiro and I. Erlich, “Online voltage stability monitoring using artificial neural network,” in Proceedings of the 3rd International Conference on Deregulation and Restructuring and Power Technologies (DRPT '08), pp. 941–947, Nanjing, China, April 2008.
  18. S. N. Pandey, S. Tapaswi, and L. Srivastava, “Integrated evolutionary neural network approach with distributed computing for congestion management,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 251–260, 2010.
  19. P. P. Palmes, T. Hayasaka, and S. Usui, “Mutation-based genetic neural network,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 587–600, 2005.
  20. S. Rajasekaran and G. A. V. Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms—Synthesis and Applications, Prentice-Hall Press, New Delhi, India, 2006.
  21. P. Ramasubramanian and A. Kannan, “A genetic-algorithm based neural network short-term forecasting framework for database intrusion prediction system,” Soft Computing, vol. 10, no. 8, pp. 699–714, 2006.
  22. S. K. Oh and W. Pedrycz, “Multi-layer self-organizing polynomial neural networks and their development with the use of genetic algorithms,” Journal of the Franklin Institute, vol. 343, no. 2, pp. 125–136, 2006.
  23. S. K. Oh, W. Pedrycz, and S. B. Roh, “Genetically optimized hybrid fuzzy set-based polynomial neural networks,” Journal of the Franklin Institute, vol. 348, pp. 415–425, 2011.
  24. Y.-Y. Hsu, C.-R. Chen, and C.-C. Su, “Analysis of electromechanical modes using an artificial neural network,” IEE Proceedings: Generation, Transmission & Distribution, vol. 141, no. 3, pp. 198–204, 1994.
  25. S. Sharma and L. Srivastava, “Prediction of transmission line overloading using intelligent technique,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 626–633, 2008.
  26. L. L. Freris and A. M. Sasson, “Investigation of the load-flow problem,” Proceedings of the Institution of Electrical Engineers, vol. 115, no. 10, pp. 1459–1470, 1968.
  27. C. A. Canizares and F. L. Alvarado, “UWPFLOW Program,” University of Waterloo, 2000.

Copyright © 2011 Garima Singh and Laxmi Srivastava. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.