Research Article  Open Access
On the Variability of Neural Network Classification Measures in the Protein Secondary Structure Prediction Problem
Abstract
We revisit the protein secondary structure prediction problem using linear and backpropagation neural network architectures commonly applied in the literature. In this context, neural network mappings are constructed between protein training set sequences and their assigned structure classes in order to analyze the class membership of test data and associated measures of significance. We present numerical results demonstrating that classifier performance measures can vary significantly depending upon the classifier architecture and the structure class encoding technique. Furthermore, an analytic formulation is introduced in order to substantiate the observed numerical data. Finally, we analyze and discuss the ability of the neural network to accurately model fundamental attributes of protein secondary structure.
1. Introduction
The protein secondary structure prediction problem can be phrased as a supervised pattern recognition problem [1–5] for which training data is readily available from reliable databases such as the Protein Data Bank (PDB) or CB513 [6]. Based upon training examples, subsequences derived from primary sequences are encoded according to a discrete set of classes. For instance, three-class encodings are commonly applied in the literature in order to numerically represent the secondary structure set (alpha helix, beta sheet, coil) [7–11]. By applying a pattern recognition approach, subsequences of unknown classification can then be tested to determine the structure class to which they belong. Phrased in this way, backpropagation neural networks [7, 12–14] and variations on the neural network theme [8, 10, 11, 15–18] have been applied to the secondary structure prediction problem with varied success. Furthermore, many tools currently applying hybrid methodologies, such as PredictProtein [19, 20], JPRED [8, 17, 21], SCRATCH [22, 23], and PSIPRED [24, 25], rely on the neural network paradigm as part of their prediction scheme.
One of the main reasons for applying the neural network approach in the first place is that such networks tend to be good universal approximators [26–30] and, theoretically, have the potential to create secondary structure models. In other words, after a given network architecture has been chosen and presented with a robust set of examples, the optimal parameters associated with the trained network, in principle, define an explicit function that can map a given protein sequence to its associated secondary structure. If the structure predicted by the network function is generally correct and consistent for an arbitrary input sequence not contained in the training set, one must be left to conclude that the neural network has accurately modeled some fundamental set of attributes that define the properties of protein secondary structure. Under these circumstances, one should then be able to extract information from the trained neural network model parameters, thus leading to a solution to the secondary structure prediction problem as well as a parametric understanding of the underlying basis for secondary structure.
The purpose of this work is to revisit the application of neural networks to the protein secondary structure prediction problem. In this setting, we consider the commonly encountered case where three structure classes (alpha helix, beta sheet, and coil) are used to classify a given protein subsequence. Given the same set of input training sequences, we demonstrate that, for the backpropagation neural network architecture, classification results and associated confidence measures can vary when two equally valid encoding schemes are employed to numerically represent the three structure classes (i.e., the “target encoding scheme”). Such a result goes against the intuition that the physical nature of the secondary structure property should be independent of the target encoding scheme chosen.
The contribution of this work is not to demonstrate improvements over existing techniques. The hybrid techniques outlined above have been demonstrated to outperform neural networks when used alone. Instead, we focus our attention on the ability of the neural network model-based approach to accurately characterize fundamental attributes of protein secondary structure, given that certain models presented within this work are demonstrated to yield variable results. Specifically, in this work, we present (1) numerical results demonstrating how secondary structure classification results can vary as a function of classifier architecture and parameter choices; (2) an analytic formulation in order to explain under what circumstances classification variability can arise; (3) an outline of specific challenges associated with the neural network model-based approach outlined above.
The conclusions reported here are relevant because they bring into discussion a body of literature that has purported to offer a viable path to the solution of the secondary structure prediction problem. Section 3 describes the methods applied in this work. In particular, this section provides details concerning the encoding of the protein sequence data (Section 3.1), the encoding of the structure classes (Section 3.2), as well as the neural network architectures (Sections 3.3–3.4) and the classifier performance measures (Section 3.5) applied in this work. Section 4 then presents results from numerical secondary structure classification experiments. Section 5 presents an analytic formulation for the linear network and the backpropagation network described in Section 3 in order to explain the numerical results given in Section 4.
2. Notation for the Supervised Classification Problem
In the supervised classification problem [1, 2], it is assumed that a training set consists of M training pairs {(x_i, y_i)}, i = 1, ..., M, where the x_i are n-dimensional input column vectors and the y_i are m-dimensional output column vectors. The goal of the supervised classifier approach is to ensure that the desired response to a given input vector x_i from the training set is the m-dimensional output vector y_i. Furthermore, when the training data can be partitioned into K distinct classes, a set of m-dimensional target column vectors t_1, ..., t_K is chosen to encode (i.e., mathematically represent) each class k for k = 1, ..., K. Under these circumstances, each output training vector y_i is derived from the set {t_1, ..., t_K}. Based upon this discussion, we summarize the use of the following symbols: (i) n is the dimension of a classifier input vector; (ii) m is the dimension of a classifier output vector; (iii) K is the number of discrete classes for the classification problem; (iv) M is the number of training pairs for the supervised classification problem.
3. Methods
In order to apply the neural network paradigm, two numerical issues must be addressed. First, since the input data comes in the form of an amino acid sequence, Section 3.1 discusses a simple encoding scheme for converting the amino acid alphabet into a usable numerical form. Second, for this work, our secondary structure target alphabet consists of elements from the set {alpha helix, beta sheet, coil}. Hence, an encoding scheme must also be chosen for representing the neural network classifier output. Section 3.2 discusses two approaches to encoding the output in fine detail because it is critical to the main point of this paper. Specifically, we choose two different target vector encoding schemes that can be related by a simple mathematical relationship. Such an approach will allow us to compare classifier performance measures based upon the target vector encoding; in addition, it will facilitate the analytic formulation presented in Section 5. Finally, Sections 3.3–3.5 review the neural network architectures and the specific classifier performance measures employed in this work. Section 6 then concludes with some final observations regarding the neural network model-based approach to the protein secondary structure prediction problem.
3.1. Encoding of Protein Sequence Input Data
For the numerical experiments, the training set was constructed using one hundred protein sequences randomly chosen from the CB513 database [6] available through the JPRED secondary structure prediction engine [21]. Furthermore, we apply a moving window of length 17 to each protein sequence where, in order to avoid protein terminal effects, the first and last 50 amino acids are omitted from the analysis. The secondary structure classification of the central residue is then assigned to each window of 17 amino acids. For the one hundred sequences analyzed, a total of 12000 windows of length 17 were extracted. The window size value of 17 was chosen based upon the assumption that the eight closest neighboring residues will have the greatest influence on the secondary structure conformation of the central residue. This assumption is consistent with similar approaches reported in the literature [7, 12–14].
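The windowing scheme above can be sketched as follows. This is a minimal illustration on toy data, not the CB513 sequences: a length-17 window slides along a sequence, central residues are restricted to lie outside the first and last 50 positions, and each window is labeled with the secondary structure class of its central residue.

```python
def extract_windows(seq, ss, w=17, margin=50):
    """Return (window, label) pairs; ss holds a per-residue structure label."""
    half = w // 2
    out = []
    # admissible central residues exclude the first and last `margin` positions
    for c in range(margin, len(seq) - margin):
        out.append((seq[c - half:c + half + 1], ss[c]))
    return out

seq = "A" * 120            # toy 120-residue sequence
ss = "H" * 120             # toy per-residue structure labels
wins = extract_windows(seq, ss)
print(len(wins))           # 20 admissible windows
print(len(wins[0][0]))     # 17 residues per window
```

Since the margin (50) exceeds the half-window (8), every admissible window fits entirely inside the sequence.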
To encode the input amino acid sequences of length 17, we employ sparse orthogonal encoding [31], which maps symbols from a given sequence alphabet onto a set of orthogonal vectors. Specifically, for an alphabet containing q symbols, a unique q-dimensional unit vector is assigned to each symbol; furthermore, the kth unit vector is one at the kth position and is zero at all other positions. Hence, if all training sequences and unknown test sequences are of uniform length s, an encoded input vector will be of dimension n, where n = qs. In our case, q = 20 and s = 17; hence, the dimension of any given input vector is n = 340.
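A sketch of sparse orthogonal encoding follows: each of the 20 standard amino acid symbols maps to a unique 20-dimensional unit vector, and a window of 17 residues is concatenated into a 340-dimensional input vector.

```python
import numpy as np

# 20 standard amino acids; the index of each symbol selects its unit vector
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
INDEX = {aa: i for i, aa in enumerate(ALPHABET)}

def encode_window(window):
    """Concatenate one-hot (sparse orthogonal) vectors for each residue."""
    x = np.zeros(len(window) * len(ALPHABET))
    for pos, aa in enumerate(window):
        x[pos * len(ALPHABET) + INDEX[aa]] = 1.0
    return x

x = encode_window("AEKVLCDGHIKLMNPQR")   # a length-17 window
print(x.shape)                           # (340,)
print(int(x.sum()))                      # 17: one nonzero entry per residue
```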
The above input vector encoding technique is commonly applied in the bioinformatics and secondary structure prediction literature [7, 15]. While many different and superior approaches to this phase of the machine learning problem have been suggested [3–5], we have chosen orthogonal encoding because of its simplicity and the fact that the results of this work do not depend upon the input encoding scheme. Instead, our work specifically focuses on potential neural network classifier variabilities induced by choice of the target vector encoding scheme.
3.2. Target Vector Encoding
Analytically characterizing the invariance of classifier performance measures clearly involves first establishing a relationship between different sets of target vectors T1 and T2. As a means of making the invariance formulation presented in this paper more tractable, we assume that two alternative sets of target vectors can be related via an affine transformation of the form T2 = βΩT1 + A (2), involving a rigid rotation Ω, where Ω is an orthogonal matrix, a scale factor β, and a translation A, where A is a matrix of translation column vectors applied to each target vector. Many target vector choices regularly applied in the literature can be related via the transformation in (2). For instance, two equally valid and commonly applied encoding schemes for the three-class problem are orthogonal encoding [31], where t_1 = (1, 0, 0)^T, t_2 = (0, 1, 0)^T, t_3 = (0, 0, 1)^T (4), and an encoding (5) where class encodings are chosen on the vertices of a triangle in a two-dimensional plane [14]. It turns out that (4) and (5) can be phrased in terms of (2) [32]; hence, the numerical results presented in this work apply this set of encodings. More precisely, the secondary structure classification associated with a given input vector is encoded using (4) and (5). The set of target vectors T1 is derived from (4) and the set of target vectors T2 is derived from (5). Both the linear and the backpropagation networks are tested first by training using T1 and then comparing classifier performance with their counterparts trained using T2. In all numerical experiments, MATLAB has been used for simulating and testing these networks.
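The affine relation between two target encodings can be illustrated numerically. In this hedged sketch, T1 is the orthogonal encoding; Ω, β, and A below are arbitrary choices used only to demonstrate the structure of the transformation, not the specific triangle encoding of the paper.

```python
import numpy as np

T1 = np.eye(3)                       # columns: targets for helix, sheet, coil

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal Omega
beta = 2.0                                         # scale factor
a = np.array([0.5, -1.0, 0.25])                    # one translation column
A = np.column_stack([a, a, a])                     # equal columns (Prop. 4 case)

T2 = beta * Q @ T1 + A               # transformed target set

# an orthogonal rotation plus a common translation preserves pairwise
# distances up to the scale factor beta
d1 = np.linalg.norm(T1[:, 0] - T1[:, 1])
d2 = np.linalg.norm(T2[:, 0] - T2[:, 1])
print(round(d2 / d1, 6))             # 2.0 == beta
```

The equal-column choice of A is exactly the condition that later makes the linear network invariant (Proposition 4).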
3.3. The Linear Network
When the supervised classifier model in (1) assumes an affine relationship between the input and output data sets (as in the case of multiple linear regression), matrices of input and output training data are generally introduced. Specifically, the linear network seeks to determine a matrix of coefficients W and a constant m-dimensional column vector b such that the ith output vector in the training set can be approximated by y_i ≈ Wx_i + b. Given this model, we can form a weight matrix of unknown coefficients that, ideally, will map each input training vector into the corresponding output training vector. If the bottom row of the input data matrix is appended with a row of ones, leading to an augmented input matrix, the goal, in matrix form, is then to find a weight matrix that minimizes the sum squared error over the set of M data pairs by satisfying the first derivative condition. The least squares solution to this problem is found via the pseudoinverse of the augmented input data matrix [33].
Once the optimal set of weights has been computed, the network response to an unknown input vector x can be determined by appending a one to x and multiplying by the optimal weight matrix; the result is an m-dimensional output column vector.
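The least squares construction above can be sketched as follows, with synthetic data standing in for the encoded sequences: a row of ones is appended to the input matrix, the weights are obtained via the pseudoinverse, and the trained network is evaluated on an input vector.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, M = 5, 3, 50
X = rng.standard_normal((n, M))            # columns are input vectors
Y = rng.standard_normal((m, M))            # columns are target outputs

Xa = np.vstack([X, np.ones((1, M))])       # augmented with a bias row
W = Y @ np.linalg.pinv(Xa)                 # least squares weights, m x (n+1)

# network response to an input vector x: append a one and multiply
x = X[:, 0]
y = W @ np.append(x, 1.0)
print(y.shape)                             # (3,)
```

Because the pseudoinverse solution satisfies the normal equations, the residual (Y - W Xa) is orthogonal to the rows of the augmented input matrix, which is precisely the first derivative condition mentioned above.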
3.4. The Backpropagation Network
Given an input vector x, the model for a backpropagation neural network with a single hidden layer consisting of h nodes is described by F(x) = W2 σ(W1 x + b), where W1, W2, and b define the set of network weights w and σ is a “sigmoidal” function that is bounded and monotonically increasing. To perform supervised training, in a manner similar to the linear network, w is determined by minimizing the objective function (i.e., the sum squared error over the training set) given the training data defined in (1). Since F is no longer linear in the weights, numerical techniques such as the gradient descent algorithm and variations thereof are relied upon to compute the set w that satisfies the first derivative condition.
Consider definitions collecting the hidden-layer responses and the output errors over the training set, with the index running over the training pairs. The first derivative conditions for the network weights prescribed by (16) and (17) can then be written in matrix form as (19) and (20) (where the superscript T denotes the matrix transpose), in which a square diagonal matrix appears whose diagonal entries consist of components of the sigmoidal derivative vector.
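A minimal sketch of this architecture and its gradient descent training follows. The dimensions, seed, and learning rate are illustrative assumptions, not the paper's setup; the weight layout (W1, b, W2) matches the single-hidden-layer model above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n, h, m, M = 4, 6, 3, 30
X = rng.standard_normal((n, M))            # columns are training inputs
Y = 0.1 * rng.standard_normal((m, M))      # columns are training targets

W1 = 0.1 * rng.standard_normal((h, n))     # hidden-layer weights
b = np.zeros((h, 1))                       # hidden-layer bias
W2 = 0.1 * rng.standard_normal((m, h))     # output-layer weights

def forward(X):
    return W2 @ sigmoid(W1 @ X + b)

err0 = np.sum((Y - forward(X)) ** 2)       # initial sum squared error
for _ in range(200):                       # plain gradient descent
    H = sigmoid(W1 @ X + b)                # hidden activations
    E = W2 @ H - Y                         # output error
    gW2 = E @ H.T                          # gradient w.r.t. W2 (up to a factor)
    D = (W2.T @ E) * H * (1.0 - H)         # backpropagated hidden delta
    gW1, gb = D @ X.T, D.sum(axis=1, keepdims=True)
    lr = 0.01
    W1 -= lr * gW1; b -= lr * gb; W2 -= lr * gW2

err1 = np.sum((Y - forward(X)) ** 2)
print(err1 < err0)                         # True: the error decreased
```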
3.5. Classification Measures
After a given classifier is trained, when presented with an input vector x of unknown classification, it will respond with an output F(x). The associated class membership is then often determined by applying a minimum distance criterion (21): the class whose target vector is closest to F(x) is chosen. Furthermore, when characterizing the performance of a pattern classifier, one often presents a set of test vectors and analyzes the associated output. In addition to determining the class membership, it is also possible to rank test vectors by the distance between a specific target vector t_k and the classifier response; in this case, a similar distance criterion (22) can be applied in order to rank an input vector with respect to class k. For the purposes of this work, the linear network output (15) facilitates the determination of the class membership and ranking with respect to class k. Similarly, assuming a set of weights for a trained backpropagation network, (21) and (22) would be applied using the network output (16).
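Both measures can be sketched directly. In this illustration the orthogonal target encoding is assumed; class membership is the index of the closest target vector, and ranking orders test outputs by their distance to a chosen class target.

```python
import numpy as np

T = np.eye(3)                                   # columns t_1, t_2, t_3

def classify(y):
    """Index of the target vector closest to the classifier output y."""
    return int(np.argmin(np.linalg.norm(T - y[:, None], axis=0)))

def rank_for_class(Ys, k):
    """Column indices of Ys sorted by increasing distance to target k."""
    return np.argsort(np.linalg.norm(Ys - T[:, [k]], axis=0))

y = np.array([0.9, 0.2, 0.1])
print(classify(y))                              # 0: closest to the first target
```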
It is well established that, in the case of normally distributed data, the classification measures presented above minimize the probability of a classification error and are directly related to the statistical significance of a classification decision [1]. Given the neural network as the supervised classification technique and two distinct choices for the set of target vectors T1 and T2, we demonstrate, in certain instances, that the classification and ranking results do not remain invariant; that is, the class membership and ranking of an input vector can change when T1 is exchanged for T2.
4. Noninvariance of Secondary Structure Predictions
In this section, we numerically demonstrate that, when different target vector encodings are applied, the neural network classifier measures outlined above are, in certain cases, observed to vary widely. For each neural network architecture under consideration, an analytic formulation is then presented in Section 5 in order to explain the observed numerical data.
As mentioned in Section 3.2, numerical experiments are performed first by training using T1 and then comparing classifier performance with the counterpart networks trained using T2. Multiple cross-validation trials are required in order to prevent potential dependency of the evaluated accuracy on the particular training or test sets chosen [7, 15]. In this work, we apply a hold-out strategy similar to that of [14], using 85% of the 12000 encoded sequences as training data (i.e., M = 10200) and 15% as test data to validate the classification results. Recognition rates for both the linear and backpropagation networks using either set of target vector encodings were approximately 65%, which is typical of this genre of classifiers that have applied similar encoding methodologies [7, 12–14]. Although these aggregate values remain consistent, using (21) and (22) we now present data demonstrating that, while class membership and ranking remain invariant for the linear network, these measures of performance vary considerably for the backpropagation network, which was trained with seventeen hidden nodes and a mean squared training error less than 0.2. Ranking results from a representative test for the linear and backpropagation networks are presented for the top 20 ranked vectors in Tables 1 and 3. Class membership data are presented in Tables 2 and 4. Observe that, for the linear network, indices for the top 20 ranked vectors remain invariant, indicating ranking invariance; in addition, no change in class membership is observed. On the other hand, Tables 3 and 4 clearly indicate a lack of consistency when considering the ranking and class membership of test vectors. A particularly troubling observation is that very few vectors ranked in the top 20 with respect to T1 were ranked in the top 20 with respect to T2. Furthermore, Table 4 indicates that the class membership of a substantial number of test vectors changed when an alternative set of target vectors was employed.
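The hold-out split described above can be sketched as follows; the random seed is an arbitrary assumption, and the array indices stand in for the 12000 encoded windows.

```python
import numpy as np

rng = np.random.default_rng(3)
perm = rng.permutation(12000)               # shuffle the window indices
n_train = int(0.85 * 12000)                 # M = 10200 training examples
train_idx, test_idx = perm[:n_train], perm[n_train:]
print(len(train_idx), len(test_idx))        # 10200 1800
```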
The data also indicates that the greatest change in class membership took place for alpha helical sequences; thus implying that there is substantial disagreement over the modeling of this secondary structure element by the backpropagation network due to a simple transformation of the target vectors.




5. Analysis
The results in Section 4 clearly show that while the pattern recognition results for the linear network remain invariant under a change in target vectors, those for the backpropagation network do not. In this section, we present analytic results in order to clearly explain and understand why these two techniques lead to different conclusions.
5.1. Invariance Formulation
Let us begin by considering two definitions.
Definition 1. Given two sets of target vectors T1 and T2, the class membership is invariant under a transformation of the target vectors if, for any input vector x, the class assigned by the classifier trained with T1 equals the class assigned by the classifier trained with T2, where each classifier output is compared against its respective set of target vectors.
Definition 2. Given two sets of target vectors T1 and T2, the ranking with respect to a specific class k is invariant under a transformation of the target vectors if, for any input vectors x_1 and x_2, the ordering of their distances to the class k target is preserved; that is, x_1 ranks above x_2 with respect to class k under T1 if and only if it does so under T2.
Based upon these definitions, the following has been established [32].
Proposition 3. Given two sets of target vectors T1 and T2, if the ranking is invariant, then the class membership of an arbitrary input vector will remain invariant.
In the analysis presented, the strategy for characterizing neural network performance depends upon the data from the previous section. For the linear network, since both ranking and classification were observed to remain invariant, it is more sensible to characterize the invariance of this network using Definition 2. Then, based upon Proposition 3, class membership invariance naturally follows. On the other hand, to explain the noninvariance of both class membership and ranking observed in the backpropagation network, the analysis is facilitated by considering Definition 1. The noninvariance of ranking then naturally follows from Proposition 3.
5.1.1. Invariance Analysis for the Linear Network
When the target vectors are subjected to the transformation defined in (2), the network output can be expressed in a correspondingly transformed form, where the transformed output data matrix is derived from the original one such that the translation vector associated with each training example is appropriately aligned with the correct target vector in the matrix A. In other words, when the output data matrix in (7) is transformed according to (2), the least squares weights take the form given in (25). Given this network, the following result is applicable [32].
Proposition 4. If (i) the number of training observations M exceeds the input vector dimension; (ii) the rows of the augmented input data matrix are linearly independent; (iii) T1 and T2 are related according to (2); (iv) the translation column vectors of A in (2) are all equal;
then the ranking and, hence, the class membership for the linear network will remain invariant.
In other words, if the columns of the matrix A in (25) are all equal, then applying the minimum distance criterion using (15) and (25) will result in (23) being satisfied. The above result is applicable to the presented numerical data with M = 10200 and n = 340; hence, the ranking and class membership invariances are corroborated by the data in Tables 1 and 2.
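Proposition 4 can be checked numerically on synthetic data (a sketch, not the paper's protein data): two linear networks are trained with target sets related by T2 = βΩT1 + A, where the columns of A are equal, and the minimum distance class memberships are verified to coincide.

```python
import numpy as np

rng = np.random.default_rng(4)
n, K, M = 6, 3, 60
X = rng.standard_normal((n, M))                 # M > n + 1 observations
labels = rng.integers(0, K, M)                  # synthetic class labels

T1 = np.eye(K)                                  # orthogonal target encoding
Q, _ = np.linalg.qr(rng.standard_normal((K, K)))
beta = 1.7
a = rng.standard_normal(K)
T2 = beta * Q @ T1 + np.column_stack([a] * K)   # equal translation columns

def train_and_classify(T):
    Y = T[:, labels]                            # per-example target columns
    Xa = np.vstack([X, np.ones((1, M))])        # augmented input matrix
    W = Y @ np.linalg.pinv(Xa)                  # least squares weights
    out = W @ Xa                                # responses on the same inputs
    # distance of each response column to each target column
    d = np.linalg.norm(out[:, :, None] - T[:, None, :], axis=0)
    return np.argmin(d, axis=1)                 # minimum distance classes

c1 = train_and_classify(T1)
c2 = train_and_classify(T2)
print(bool(np.all(c1 == c2)))                   # True: membership is invariant
```

The invariance holds because, under this transformation, the second network's output is exactly the rotated, scaled, and translated output of the first, so all distances to the targets are rescaled by β and the argmin is unchanged.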
5.1.2. Invariance Analysis for the Backpropagation Network
In this section, we seek to characterize the noninvariance observed in class membership using the backpropagation network. If the class membership varies due to a change in target vectors, then this variation should be quantifiable by characterizing the boundary separating two respective classes. The decision boundary between class i and class j is defined by the set of points x such that the classifier output F(x) is equidistant from t_i and t_j, where, for the purposes of this section, F is defined by (16). Under these circumstances, if a Euclidean norm is applied in (21), the solution set consists of all x such that ||F(x) − t_i|| = ||F(x) − t_j||. Expanding terms on both sides of this equation leads to the condition in (30). If the class membership of a representative vector is to remain invariant under a change of target vectors, this same set of points must also satisfy the analogous condition (31) for the transformed targets. Assuming that two networks have been trained using two different sets of target vectors T1 and T2, the corresponding sets of weights determine the network outputs in (16). Without loss of generality, we consider the case where all target vectors are normalized to a value of one, such that ||t_k|| = 1 for k = 1, ..., K. In this case, the conditions in (30) and (31) simplify to (32) and (33).
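As a sketch, expanding the equidistance condition under a Euclidean norm and unit-norm targets proceeds as follows (the equation numbers are those used in the text):

```latex
% Decision boundary between classes i and j for classifier output F(x):
\|F(x) - t_i\|^2 = \|F(x) - t_j\|^2 .
% Expanding both sides and cancelling the common term \|F(x)\|^2:
-2\, t_i^{T} F(x) + \|t_i\|^2 = -2\, t_j^{T} F(x) + \|t_j\|^2 ,
% which gives the boundary condition (30):
2\,(t_j - t_i)^{T} F(x) = \|t_j\|^2 - \|t_i\|^2 .
% Under the normalization \|t_k\| = 1, this reduces to
(t_j - t_i)^{T} F(x) = 0 .
```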
We first consider a special case where the target vectors are related by a scaled rigid rotation, that is, with A = 0 in (2). Under these circumstances, if the weights of the second network are chosen as the correspondingly rotated and scaled weights of the first, it should be clear that the objective function for the second network is minimized by this choice. Another way to see this is to observe that the first derivative conditions (19) and (20) remain invariant for this choice of target vectors and network weights. Hence, we have the following.
Proposition 5. For this choice of network weights, if A = 0 in (2), then the class membership for the backpropagation network will remain invariant.
Proof. Simply consider (32) and (33) and choose the weights of the second network according to the rotation and scaling relationship described above. It then immediately follows that if a point x satisfies (32), then it also satisfies (33) and, hence, is a point on the decision boundary for both networks.
Intuitively, a scaled, rigid rotation of the target vectors should not affect the decision boundary. However, when the more general transformation of (2) is applied with A ≠ 0, we now demonstrate that, due to the nonlinearity of F in (16), no simple relationship exists such that (32) and (33) can simultaneously be satisfied by the same set of points. We first investigate the possibility of establishing an analytic relationship between the sets of weights for both networks. In other words, we seek, ideally invertible, functions such that the first set of trained weights can be transformed into the second. If this can be done, then an analytic procedure similar to that presented in the proof of Proposition 5 can be established in order to relate (32) to (33) for the general case. Since the first derivative conditions (19) and (20) define the trained weights, it is reasonable to rephrase these equations in terms of the objective function, where the translation vector associated with each training example is the one assigned to its target class. From these equations, it should be clear that no simple analytic relationship exists that will transform the first set of weights into the second. A numerical algorithm such as gradient descent will, assuming a local minimum actually exists, arrive at some solution for both sets of weights. We must therefore be content with the assumed existence of some set of functions defined by (39). Again, let us consider any point on the decision boundary such that (40) holds. Such a point must also simultaneously satisfy (41) and (42). At first glance, a choice of rotated and scaled weights with equal translation columns (as in Proposition 4) appears to bring us close to a solution. However, the translation term is problematic. Although a particular choice of weights would yield the solution to (42), it should be clear that these values would not satisfy (40).
Another way to analyze this problem is to first fix the hidden-layer weights of the second network. Then, for any point on the decision boundary, from (41) and (42), one can equate terms up to some constant, where the remaining weights must also satisfy (40). Given an arbitrary training set defined by (1), it is highly unlikely that this constraint can be satisfied. One remote scenario might be that the hidden-layer terms are always small; in this case, given a sigmoidal function that is linear near zero, a linearized version of (43) could be solved using techniques described in [32]. However, this again is an unlikely set of events given an arbitrary training set. Therefore, given the transformation of (2), we are left to conclude that class membership invariance and, hence, ranking invariance are, in general, not achievable using the backpropagation neural network.
5.2. Discussion
Intuitively, given a reasonable target encoding scheme, one would desire that properties related to protein secondary structure be independent of the target vectors chosen. However, we have presented numerical data and a theoretical foundation demonstrating that secondary structure classification and confidence measures can vary depending on the type of neural network architecture and target vector encoding scheme employed. Specifically, linear network classification has been demonstrated to remain invariant under a change in the target structure encoding scheme, while the backpropagation network has not. As the number of training observations increases, for the methodology applied in this work, recognition rates remain consistent with those reported in the literature; however, we have observed that adding more training data does not improve the invariance of classification measures for the backpropagation network. This conclusion is corroborated by the analytic formulation presented above.
6. Conclusions
As pointed out in the introduction, one major purpose of the neural network is to create a stable and reliable model that maps input training data to an output classification with the hope of extracting informative parameters. When methods similar to those in the literature are applied [7, 12–14], we have demonstrated that classifier performance measures can vary considerably. Under these circumstances, parameters derived from a trained network for analytically describing protein secondary structure may not comprise a reliable set for the model-based approach. Furthermore, classifier variability would imply that a stable parametric model has not been derived. It is in some sense paradoxical that the neural network has been applied for structure classification and, yet, associated parameters have not been applied for describing protein secondary structure. The neural network approach to deriving a solution to the protein secondary structure prediction problem therefore requires deeper exploration.
Acknowledgments
This publication was made possible by Grant Number G12RR017581 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). The authors would also like to thank the reviewers for their helpful comments.
References
[1] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2007.
[2] M. T. Hagan, H. B. Demuth, and M. H. Beale, Neural Network Design, PWS Publishing, 1996.
[3] M. N. Nguyen and J. C. Rajapakse, “Multi-class support vector machines for protein secondary structure prediction,” Genome Informatics, vol. 14, pp. 218–227, 2003.
[4] H. J. Hu, Y. Pan, R. Harrison, and P. C. Tai, “Improved protein secondary structure prediction using support vector machine with a new encoding scheme and an advanced tertiary classifier,” IEEE Transactions on Nanobioscience, vol. 3, no. 4, pp. 265–271, 2004.
[5] W. Zhong, G. Altun, X. Tian, R. Harrison, P. C. Tai, and Y. Pan, “Parallel protein secondary structure prediction schemes using Pthread and OpenMP over hyperthreading technology,” Journal of Supercomputing, vol. 41, no. 1, pp. 1–16, 2007.
[6] J. A. Cuff and G. J. Barton, “Application of enhanced multiple sequence alignment profiles to improve protein secondary structure prediction,” Proteins, vol. 40, pp. 502–511, 2000.
[7] J. M. Chandonia and M. Karplus, “Neural networks for secondary structure and structural class predictions,” Protein Science, vol. 4, no. 2, pp. 275–285, 1995.
[8] J. A. Cuff and G. J. Barton, “Evaluation and improvement of multiple sequence methods for protein secondary structure prediction,” Proteins, vol. 34, pp. 508–519, 1999.
[9] G. E. Crooks and S. E. Brenner, “Protein secondary structure: entropy, correlations and prediction,” Bioinformatics, vol. 20, no. 10, pp. 1603–1611, 2004.
[10] L. H. Wang, J. Liu, and H. B. Zhou, “A comparison of two machine learning methods for protein secondary structure prediction,” in Proceedings of the 2004 International Conference on Machine Learning and Cybernetics, pp. 2730–2735, August 2004.
[11] G. Z. Zhang, D. S. Huang, Y. P. Zhu, and Y. X. Li, “Improving protein secondary structure prediction by using the residue conformational classes,” Pattern Recognition Letters, vol. 26, no. 15, pp. 2346–2352, 2005.
[12] N. Qian and T. J. Sejnowski, “Predicting the secondary structure of globular proteins using neural network models,” Journal of Molecular Biology, vol. 202, no. 4, pp. 865–884, 1988.
[13] L. Howard Holley and M. Karplus, “Protein secondary structure prediction with a neural network,” Proceedings of the National Academy of Sciences of the United States of America, vol. 86, no. 1, pp. 152–156, 1989.
[14] J. M. Chandonia and M. Karplus, “The importance of larger data sets for protein secondary structure prediction with neural networks,” Protein Science, vol. 5, no. 4, pp. 768–774, 1996.
[15] B. Rost and C. Sander, “Improved prediction of protein secondary structure by use of sequence profiles and neural networks,” Proceedings of the National Academy of Sciences of the United States of America, vol. 90, no. 16, pp. 7558–7562, 1993.
[16] B. Rost and C. Sander, “Prediction of protein secondary structure at better than 70% accuracy,” Journal of Molecular Biology, vol. 232, no. 2, pp. 584–599, 1993.
[17] J. A. Cuff, M. E. Clamp, A. S. Siddiqui, M. Finlay, and G. J. Barton, “JPred: a consensus secondary structure prediction server,” Bioinformatics, vol. 14, no. 10, pp. 892–893, 1998.
[18] S. Hua and Z. Sun, “A novel method of protein secondary structure prediction with high segment overlap measure: support vector machine approach,” Journal of Molecular Biology, vol. 308, no. 2, pp. 397–407, 2001.
[19] B. Rost, “PHD: predicting one-dimensional protein structure by profile-based neural networks,” Methods in Enzymology, vol. 266, pp. 525–539, 1996.
[20] B. Rost, G. Yachdav, and J. Liu, “The PredictProtein server,” Nucleic Acids Research, vol. 32, pp. W321–W326, 2004.
[21] C. Cole, J. D. Barber, and G. J. Barton, “The Jpred 3 secondary structure prediction server,” Nucleic Acids Research, vol. 36, pp. W197–W201, 2008.
[22] G. Pollastri, D. Przybylski, B. Rost, and P. Baldi, “Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles,” Proteins, vol. 47, no. 2, pp. 228–235, 2002.
[23] J. Cheng, A. Z. Randall, M. J. Sweredoski, and P. Baldi, “SCRATCH: a protein structure and structural feature prediction server,” Nucleic Acids Research, vol. 33, no. 2, pp. W72–W76, 2005.
[24] D. T. Jones, “Protein secondary structure prediction based on position-specific scoring matrices,” Journal of Molecular Biology, vol. 292, no. 2, pp. 195–202, 1999.
[25] K. Bryson, L. J. McGuffin, R. L. Marsden, J. J. Ward, J. S. Sodhi, and D. T. Jones, “Protein structure prediction servers at University College London,” Nucleic Acids Research, vol. 33, no. 2, pp. W36–W38, 2005.
[26] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals, and Systems, vol. 2, no. 4, pp. 303–314, 1989.
[27] J. Moody and C. J. Darken, “Fast learning in networks of locally tuned processing units,” Neural Computation, vol. 1, pp. 281–294, 1989.
[28] T. Poggio and F. Girosi, “Networks for approximation and learning,” Proceedings of the IEEE, vol. 78, no. 9, pp. 1481–1497, 1990.
[29] D. F. Specht, “Probabilistic neural networks,” Neural Networks, vol. 3, no. 1, pp. 109–118, 1990.
[30] P. András, “Orthogonal RBF neural network approximation,” Neural Processing Letters, vol. 9, no. 2, pp. 141–151, 1999.
[31] P. Baldi and S. Brunak, Bioinformatics: The Machine Learning Approach, MIT Press, 1998.
[32] E. Sakk, D. J. Schneider, C. R. Myers, and S. W. Cartinhour, “On the selection of target vectors for a class of supervised pattern recognizers,” IEEE Transactions on Neural Networks, vol. 20, no. 5, pp. 745–757, 2009.
[33] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 1989.
Copyright
Copyright © 2013 Eric Sakk and Ayanna Alexander. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.