Research Article  Open Access
Classification of Cancer Recurrence with AlphaBeta BAM
Abstract
Bidirectional Associative Memories (BAMs) based on the first model proposed by Kosko do not have perfect recall of the training set, and their algorithm must iterate until it reaches a stable state. In this work, we use the AlphaBeta BAM model to automatically classify cancer recurrence in female patients who have undergone breast cancer surgery. AlphaBeta BAM presents perfect recall of all the training patterns and has a one-shot algorithm; these advantages make AlphaBeta BAM a suitable tool for classification. We use data from the Haberman database, and the leave-one-out algorithm was applied to analyze the performance of our model as a classifier. We obtain a classification percentage of 99.98%.
1. Introduction
Breast cancer is a preponderant disease throughout the world and a leading cause of death among women. Women who have suffered from breast cancer and overcome it are at risk of a relapse; therefore, they must be monitored after the tumor has been removed.
Predicting recurrent cancer in women with previous surgery has high monetary and social costs; as a result, many researchers working in Artificial Intelligence (AI) have been attracted to this problem and have applied a variety of AI tools to breast cancer prediction. Some of these works are described as follows.
Many AI methods have shown better results than those obtained by experimental methods; for example, in 1997 Burke et al. [1] compared the accuracy of the TNM staging system with that of a multilayer backpropagation Artificial Neural Network (ANN) for predicting the 5-year survival of patients with breast carcinoma. The ANN increased the prediction capacity by 10%, reaching a final result of 54%. They used the following parameters: tumor size, number of positive regional lymph nodes, and distant metastasis.
Domingos [2] used a breast cancer database from the UCI repository to classify patient survival, unifying two widely used empirical approaches: rule induction and instance-based learning.
In 2000, Boros et al. [3] used the Logical Analysis of Data method to predict the nature of a tumor, malignant or benign, on the Breast Cancer (Wisconsin) database; the classification capacity was 97.2%. The same database was used by Street and Kim [4], who combined several classifiers to create a large-scale classifier, and by Wang and Witten [5], who presented a general modeling method for optimal probability prediction over future observations and obtained 96.7% classification.
K. Huang et al. [6] constructed a classifier with the Minimax Probability Machine (MPM), which provides a worst-case bound on the probability of misclassification of future data points based on reliable estimates of the means and covariance matrices of the classes in the training data. They used the same database as Domingos; the classification capacity was 82.5%.
Addressing another type of breast cancer diagnosis, C.-L. Huang et al. [7] employed the Support Vector Machine method to predict a breast tumor from information on five DNA viruses.
In the last two decades, the impact of breast cancer in Mexico has increased [8]. Every year 3500 women die from breast cancer, making it the first cause of death and the second most frequent type of tumor. This motivated us to apply Associative Models to classify cancer recurrence.
The area of Associative Memories, as a relevant part of Computing Sciences, has acquired great importance and dynamism within international research teams, specifically those working on the theory and applications of pattern recognition and image processing. Classification is a specific task of pattern recognition, because its main goal is to recognize features of patterns and assign those patterns to the corresponding class.
Associative Memories have developed alongside Neural Networks, from the first model of the artificial neuron [9], through the important pioneering works on perceptron-based neural networks [11–13], to neural network models based on modern concepts such as mathematical morphology [10].
In 1982 Hopfield presented his associative memory; this model is inspired by physical concepts, and its distinguishing feature is an iterative algorithm [14]. This work has great relevance because Hopfield proved that interactions of simple processing elements similar to neurons give rise to collective computational properties, such as memory stability.
However, the Hopfield model has two disadvantages: first, the associative memory shows a low recall capacity, 0.15n, where n is the dimension of the stored patterns; second, the Hopfield memory is autoassociative, which means that it is not able to associate different patterns.
In 1988, Kosko [15] developed a heteroassociative memory from two Hopfield memories to overcome the second disadvantage of the Hopfield model. The Bidirectional Associative Memory (BAM) is based on an iterative algorithm, just like Hopfield's. Many later models were based on this algorithm: some replaced the original learning rule with an exponential rule [16–18]; others used a multiple training method and dummy augmentation [19] to make more pattern pairs become stable states while eliminating spurious states. Linear programming techniques [20], the gradient descent method [21, 22], genetic algorithms [23], and delayed BAMs [24, 25] have been used with the same purpose. There are other models which are not based on Kosko's, so they are not iterative and have no stability problems: the Morphological [26] and Feedforward [27] BAMs. All these models appeared to overcome the low recall capacity shown by the first BAM; however, none of them has been able to recover all training patterns. Besides, these models require the patterns to satisfy certain conditions, such as Hamming distance, orthogonality, linear independence, or linear programming solutions, among others.
The bidirectional associative memory model used in this work is based on the AlphaBeta Associative Memories [28]; it is not an iterative process and has no stability problems. The AlphaBeta BAM recall capacity is the maximum possible, 2^min(n, m), where n and m are the dimensions of the input and output patterns, respectively. This model always shows perfect recall without requiring any condition on the patterns; its perfect recall has mathematical foundations [29]. It has been demonstrated that this model has a complexity of O(n^2) (see Section 2.4). Its main application is pattern recognition, and it has been applied as a translator [30] and a fingerprint identifier [31].
Because AlphaBeta BAM shows perfect recall, it is used as a classifier in this work. We used the Haberman database, which contains data from cancer recurrence patients, because it has been used in several works to test other classification methods, such as Support Vector Machines (SVMs) combined with Cholesky factorization [32], distance geometry [33], the bagging technique [34], model averaging with a discrete Bayesian network [35], the in-group and out-group concept [36], and fuzzy ARTMAP neural networks [37]. AlphaBeta BAM aims to surpass these previous results, noting that none of the aforementioned works used associative models for classification.
In Section 2 we present basic concepts of associative models, along with the description of the AlphaBeta associative memories, the AlphaBeta BAM, and its complexity. Experiments and results are shown in Section 3, along with the analysis of our proposal using the leave-one-out method.
2. AlphaBeta Bidirectional Associative Memories
In this section the AlphaBeta Bidirectional Associative Memory is presented. Since it is based on the AlphaBeta autoassociative memories, a summary of that model is given before presenting our BAM model.
2.1. Basic Concepts
Basic concepts about associative memories were established three decades ago in [38–40]; nonetheless, here we use the concepts, results, and notation introduced in [28]. An associative memory M is a system that relates input patterns and output patterns as follows: x → M → y, with x and y being the input and output pattern vectors, respectively. Each input vector forms an association with a corresponding output vector. For k a positive integer, the corresponding association will be denoted as (x^k, y^k). The associative memory M is represented by a matrix whose ij-th component is m_ij. M is generated from an a priori finite set of known associations, called the fundamental set of associations.
If k is an index, the fundamental set is represented as {(x^k, y^k) | k = 1, 2, …, p}, with p being the cardinality of the set. The patterns that form the fundamental set are called fundamental patterns. If it holds that x^k = y^k for all k ∈ {1, 2, …, p}, M is autoassociative; otherwise it is heteroassociative, in which case there exists some k ∈ {1, 2, …, p} for which x^k ≠ y^k. A distorted version of a pattern x^k to be recovered will be denoted as x̃^k. If, when feeding a distorted version x̃^ϖ (with ϖ ∈ {1, 2, …, p}) to the associative memory M, the output corresponds exactly to the associated pattern y^ϖ, we say that recall is perfect.
2.2. AlphaBeta Associative Memories
Among the variety of associative memory models described in the scientific literature, two deserve emphasis because of their relevance: the morphological associative memories introduced by Ritter et al. [39], and the AlphaBeta associative memories. Because of their excellent characteristics, which make them superior in many aspects to other associative memory models, the morphological associative memories served as the starting point for the creation and development of the AlphaBeta associative memories.
The AlphaBeta associative memories are of two kinds and are able to operate in two different modes. The operator α is useful in the learning phase, and the operator β is the basis of the pattern recall phase. The heart of the mathematical tools used in the AlphaBeta model is two binary operators designed specifically for these memories. They are defined as follows: first, we define the sets A = {0, 1} and B = {0, 1, 2}; then the operators α : A × A → B and β : B × A → A are given by Tables 1 and 2, respectively:

Table 1: α(0, 0) = 1, α(0, 1) = 0, α(1, 0) = 2, α(1, 1) = 1.

Table 2: β(0, 0) = 0, β(0, 1) = 0, β(1, 0) = 0, β(1, 1) = 1, β(2, 0) = 1, β(2, 1) = 1.
The sets A and B, the α and β operators, along with the usual ⋀ (minimum) and ⋁ (maximum) operators, form the algebraic system (A, B, α, β, ⋀, ⋁) which is the mathematical basis for the AlphaBeta associative memories.
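As a concrete sketch of how these operators behave, the two tables can be encoded as lookup dictionaries (Python; all names are ours). The α values are consistent with the comparison logic of the listing in Section 2.4, and β recovers the first argument of α in the sense β(α(x, y), y) = x:

```python
# Sketch of the AlphaBeta binary operators (names are ours).
# A = {0, 1}, B = {0, 1, 2}; alpha: A x A -> B, beta: B x A -> A.
ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}
BETA = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1, (2, 0): 1, (2, 1): 1}

def alpha(x, y):
    """alpha operator: compares two bits, yielding a value in B."""
    return ALPHA[(x, y)]

def beta(x, y):
    """beta operator: maps a value in B and a bit back to a bit."""
    return BETA[(x, y)]
```

Note that β undoes α: for every pair of bits, β(α(x, y), y) = x, which is what makes recall possible.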
Some characteristics of the AlphaBeta autoassociative memories are shown below.
(1) The fundamental set takes the form {(x^k, x^k) | k = 1, 2, …, p}.
(2) Both input and output fundamental patterns are of the same dimension, denoted by n.
(3) The memory is a square n × n matrix, for both modes, V (max) and Λ (min). If x^k ∈ A^n, then
v_ij = ⋁_{k=1}^{p} α(x_i^k, x_j^k) and λ_ij = ⋀_{k=1}^{p} α(x_i^k, x_j^k),
and, according to the definition of α, v_ij ∈ B and λ_ij ∈ B, for all i ∈ {1, 2, …, n} and all j ∈ {1, 2, …, n}.
In the recall phase, when a pattern x^ω is presented to the memories V and Λ, the i-th components of the recalled patterns are
(V Δ_β x^ω)_i = ⋀_{j=1}^{n} β(v_ij, x_j^ω) and (Λ ∇_β x^ω)_i = ⋁_{j=1}^{n} β(λ_ij, x_j^ω).
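A minimal sketch of the learning and recall phases of the AlphaBeta autoassociative memories, as summarized above (Python; function names are ours, operator tables as in Section 2.2):

```python
# AlphaBeta autoassociative memories: max memory V and min memory Lam (sketch).
ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}
BETA = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1, (2, 0): 1, (2, 1): 1}

def learn(patterns):
    """v_ij = max over k of alpha(x_i^k, x_j^k); lam_ij = min over k."""
    n = len(patterns[0])
    V = [[max(ALPHA[(x[i], x[j])] for x in patterns) for j in range(n)] for i in range(n)]
    Lam = [[min(ALPHA[(x[i], x[j])] for x in patterns) for j in range(n)] for i in range(n)]
    return V, Lam

def recall_max(V, x):
    """i-th component: minimum over j of beta(v_ij, x_j)."""
    return [min(BETA[(row[j], x[j])] for j in range(len(x))) for row in V]

def recall_min(Lam, x):
    """i-th component: maximum over j of beta(lam_ij, x_j)."""
    return [max(BETA[(row[j], x[j])] for j in range(len(x))) for row in Lam]
```

Both modes recall every fundamental pattern exactly, e.g. recall_max(V, x) == x for each stored x.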
2.3. AlphaBeta BAM
In general, any bidirectional associative memory model appearing in the current scientific literature can be drawn as Figure 1 shows.
A general BAM is a “black box’’ operating in the following way: given a pattern x, the associated pattern y is obtained, and given the pattern y, the associated pattern x is recalled. Besides, if we assume that x̃ and ỹ are noisy versions of x and y, respectively, it is expected that the BAM can recover the corresponding noise-free patterns y and x.
The model used in this paper has been named AlphaBeta BAM since the AlphaBeta associative memories, both max and min, play a central role in the model design. However, before going into detail about the processing of an AlphaBeta BAM, we give the following definitions.
In this work we will assume that the AlphaBeta associative memories have a fundamental set denoted by {(x^k, y^k) | k = 1, 2, …, p}, with x^k ∈ A^n, y^k ∈ A^m, A = {0, 1}, and n, m, p positive integers. Also, it holds that all input patterns are different; that is, x^μ = x^k if and only if μ = k. If for all k ∈ {1, 2, …, p} it holds that x^k = y^k, the AlphaBeta memory is autoassociative; if, on the contrary, there exists some k ∈ {1, 2, …, p} for which x^k ≠ y^k, then the AlphaBeta memory is heteroassociative.
Definition 2.1 (One-Hot). Let the set A be {0, 1}, and let p ∈ Z+, p > 1, and k ∈ Z+ be such that 1 ≤ k ≤ p. The k-th one-hot vector of p bits is defined as the vector h^k ∈ A^p for which the k-th component is h_k^k = 1 and the remaining components are h_j^k = 0, for all j ≠ k, 1 ≤ j ≤ p.
Remark 2.2. In this definition, the value p = 1 is excluded since a one-hot vector of dimension 1, given its essence, has no reason to be.
Definition 2.3 (Zero-Hot). Let the set A be {0, 1}, and let p ∈ Z+, p > 1, and k ∈ Z+ be such that 1 ≤ k ≤ p. The k-th zero-hot vector of p bits is defined as the vector h̄^k ∈ A^p for which the k-th component is h̄_k^k = 0 and the remaining components are h̄_j^k = 1, for all j ≠ k, 1 ≤ j ≤ p.
Remark 2.4. In this definition, the value p = 1 is excluded since a zero-hot vector of dimension 1, given its essence, has no reason to be.
Definition 2.5 (Expansion vectorial transform). Let the set A be {0, 1}, and let n, m ∈ Z+. Given two arbitrary vectors x ∈ A^n and e ∈ A^m, the expansion vectorial transform of order m, τ^e : A^n → A^{n+m}, is defined as τ^e(x, e) = X ∈ A^{n+m}, a vector whose components are X_i = x_i for 1 ≤ i ≤ n and X_i = e_{i−n} for n + 1 ≤ i ≤ n + m.
Definition 2.6 (Contraction vectorial transform). Let the set A be {0, 1}, and let n, m ∈ Z+. Given one arbitrary vector X ∈ A^{n+m}, the contraction vectorial transform of order m, τ^c : A^{n+m} → A^m, is defined as τ^c(X) = c ∈ A^m, a vector whose components are c_i = X_{i+n} for 1 ≤ i ≤ m.
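Definitions 2.1–2.6 translate directly into a few helper functions (Python sketch; 1-indexed k as in the definitions, function names are ours):

```python
def one_hot(k, p):
    """k-th one-hot vector of p bits: 1 at position k, 0 elsewhere (Definition 2.1)."""
    return [1 if i == k - 1 else 0 for i in range(p)]

def zero_hot(k, p):
    """k-th zero-hot vector of p bits: 0 at position k, 1 elsewhere (Definition 2.3)."""
    return [0 if i == k - 1 else 1 for i in range(p)]

def expand(x, e):
    """Expansion vectorial transform: append e to x (Definition 2.5)."""
    return list(x) + list(e)

def contract(X, m):
    """Contraction vectorial transform: keep the last m components (Definition 2.6)."""
    return list(X)[-m:]
```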
In both directions, the model is made up by two stages, as shown in Figure 2.
For simplicity, we will first describe the process in one direction, and later present the complementary direction which gives bidirectionality to the model (see Figure 3).
The function of Stage 2 is to offer a y^k as output given a one-hot vector h^k as input.
Now we assume that the input to Stage 2 is one element of a set of p orthonormal vectors. Recall that the Linear Associator has perfect recall when it works with orthonormal vectors. In this work we use a variation of the Linear Associator in order to obtain y^k, starting from a one-hot vector h^k with 1 in its k-th coordinate.
For the construction of the modified Linear Associator, its learning phase is skipped and a matrix M representing the memory is built directly. Each column of this matrix corresponds to an output pattern y^k. In this way, when the matrix M is operated with a one-hot vector h^k, the corresponding y^k will always be recalled.
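The modified Linear Associator amounts to storing the output patterns as columns; multiplying by a one-hot vector then simply selects the corresponding column (Python sketch, names ours):

```python
def build_linear_associator(outputs):
    """Matrix M whose k-th column is the output pattern y^k."""
    m, p = len(outputs[0]), len(outputs)
    return [[outputs[k][i] for k in range(p)] for i in range(m)]

def apply_associator(M, h):
    """M times a one-hot vector h: recalls the column selected by the 1 in h."""
    return [sum(row[k] * h[k] for k in range(len(h))) for row in M]
```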
The task of Stage 1 is the following: given an x^k, or a noisy version of it (x̃^k), the one-hot vector h^k must be obtained without ambiguity and with no conditions. In its learning phase, Stage 1 has the following algorithm.
(1) For k = 1, 2, …, p, do the expansion X^k = τ^e(x^k, h^k).
(2) For i, j ∈ {1, 2, …, n + p}, build the max memory V: v_ij = ⋁_{k=1}^{p} α(X_i^k, X_j^k).
(3) For k = 1, 2, …, p, do the expansion X̄^k = τ^e(x^k, h̄^k).
(4) For i, j ∈ {1, 2, …, n + p}, build the min memory Λ: λ_ij = ⋀_{k=1}^{p} α(X̄_i^k, X̄_j^k).
(5) Create the modified Linear Associator whose k-th column is y^k.
The recall phase is described through the following algorithm.
(1) Present, at the input to Stage 1, a vector x^ω of the fundamental set, for some index ω ∈ {1, 2, …, p}.
(2) Build the vector u ∈ A^p with u_i = 1 for i = 1, 2, …, p.
(3) Do the expansion F = τ^e(x^ω, u).
(4) Obtain the vector R = V Δ_β F.
(5) Do the contraction r = τ^c(R). If r is a one-hot vector, it is assured that r = h^ω, and then y^ω is recalled by the modified Linear Associator. STOP. Else:
(6) Build the vector ū ∈ A^p with ū_i = 0 for i = 1, 2, …, p.
(7) Do the expansion G = τ^e(x^ω, ū).
(8) Obtain the vector S = Λ ∇_β G.
(9) Do the contraction s = τ^c(S).
(10) If s is a zero-hot vector, then it is assured that s = h̄^ω, and y^ω is recalled using s̄, where s̄ is the negated vector of s. STOP. Else:
(11) Do the operation t = r ⋀ s̄, where ⋀ is the symbol of the logical AND operator, so that t = h^ω. STOP.
The process in the contrary direction, which presents a pattern y^k as input to the AlphaBeta BAM and obtains its corresponding x^k, is very similar to the one described above. The task of Stage 3 is to obtain a one-hot vector given a y^k; Stage 4 is a modified Linear Associator built in similar fashion to the one in Stage 2.
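Putting the stages together, a compact one-direction sketch of the AlphaBeta BAM can be written as follows (Python; the function names and the explicit all-ones/all-zeros expansion tails are our reading of the algorithm above, so treat this as illustrative rather than the reference implementation):

```python
ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}
BETA = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1, (2, 0): 1, (2, 1): 1}

def learn(xs, ys):
    """Stage 1 learning: max memory V on one-hot expansions, min memory Lam
    on zero-hot expansions; Stage 2 keeps the outputs for column lookup."""
    p = len(xs)
    ex = [list(xs[k]) + [1 if i == k else 0 for i in range(p)] for k in range(p)]
    zx = [list(xs[k]) + [0 if i == k else 1 for i in range(p)] for k in range(p)]
    d = len(ex[0])
    V = [[max(ALPHA[(v[i], v[j])] for v in ex) for j in range(d)] for i in range(d)]
    Lam = [[min(ALPHA[(v[i], v[j])] for v in zx) for j in range(d)] for i in range(d)]
    return V, Lam, [list(y) for y in ys]

def recall(V, Lam, ys, x):
    """Stage 1 recall (three branches), then Stage 2 column selection."""
    p = len(ys)
    F = list(x) + [1] * p                      # expansion with an all-ones tail
    r = [min(BETA[(V[i][j], F[j])] for j in range(len(F))) for i in range(len(F))][-p:]
    if sum(r) == 1:                            # r is a one-hot vector
        return ys[r.index(1)]
    G = list(x) + [0] * p                      # expansion with an all-zeros tail
    s = [max(BETA[(Lam[i][j], G[j])] for j in range(len(G))) for i in range(len(G))][-p:]
    if sum(s) == p - 1:                        # s is a zero-hot vector
        return ys[s.index(0)]
    t = [r[i] & (1 - s[i]) for i in range(p)]  # r AND (NOT s)
    return ys[t.index(1)]
```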
2.4. The AlphaBeta BAM Algorithm Complexity
An algorithm is a finite set of precise instructions for performing a calculation or solving a problem [41]. In general, it is accepted that an algorithm provides a satisfactory solution when it produces a correct answer and is efficient. One measure of efficiency is the time required by the computer to solve a problem using a given algorithm. A second measure of efficiency is the amount of memory required to implement the algorithm when the input data are of a given size.
The analysis of the time required to solve a problem of a particular size implies finding the time complexity of the algorithm. The analysis of the memory needed by the computer implies finding the space complexity of the algorithm.
Space Complexity
In order to store the input patterns, a matrix with dimensions p × (n + p) is needed; the input patterns and the appended vectors, both one-hot and zero-hot, are stored in the same matrix. Since the components belong to A = {0, 1}, these values can be represented by character variables, taking 1 byte each. The total amount of bytes is p(n + p).
A matrix with dimensions p × (m + p) is needed to store the output patterns; again, the output patterns and the appended vectors, both one-hot and zero-hot, are stored in the same matrix, with 1 byte per component. The total amount of bytes is p(m + p).
During the learning phase, 4 matrices are needed: two for the AlphaBeta autoassociative memories of type max, Vx and Vy, and two more for the AlphaBeta autoassociative memories of type min, Λx and Λy. Vx and Λx have dimensions (n + p) × (n + p), while Vy and Λy have dimensions (m + p) × (m + p). Given that these matrices hold only small nonnegative integers (values in B), their components can be represented with character variables of 1 byte. The total amounts of bytes are 2(n + p)^2 and 2(m + p)^2, respectively.
A vector is used to hold the recalled one-hot vector, whose dimension is p. Since the components of any one-hot vector take the values 0 and 1, they can be represented by character variables, occupying 1 byte each. The total amount of bytes is p.
The total amount of bytes required to implement an AlphaBeta BAM is p(n + p) + p(m + p) + 2(n + p)^2 + 2(m + p)^2 + p.
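Collecting the counts of this section — two pattern matrices of p(n + p) and p(m + p) bytes, two (n + p) × (n + p) and two (m + p) × (m + p) memory matrices, and one p-byte one-hot vector — gives a simple closed form (our tally, assuming one byte per component):

```python
def alphabeta_bam_bytes(n, m, p):
    """Rough byte count for an AlphaBeta BAM with 1-byte components:
    pattern matrices, the four memory matrices, and one one-hot vector."""
    return p * (n + p) + p * (m + p) + 2 * (n + p) ** 2 + 2 * (m + p) ** 2 + p
```

For example, n = 3, m = 2, p = 2 gives 10 + 8 + 50 + 32 + 2 = 102 bytes; the space requirement grows quadratically in n + p and m + p.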
Time Complexity
The time complexity of an algorithm can be expressed in terms of the number of operations used by the algorithm when the input has a particular size. The operations used to measure time complexity can be integer comparison, integer addition, integer division, variable assignment, logical comparison, or any other elemental operation.
The following notation is defined: EO: elemental operation; n_pares: number of associated pattern pairs; n: dimension of the patterns plus the appended one-hot or zero-hot vectors.
The recall phase algorithm will be analyzed, since this is the portion of the whole algorithm that requires the greatest number of elemental operations.
Recalling Phase
u = 0;                                   // 1 EO, assignment
while (u < n_pares)                      // n_pares EO, comparison
{
  i = 0;                                 // n_pares EO, assignment
  while (i < n)                          // n_pares*n EO, comparison
  {
    j = 0;                               // n_pares*n EO, assignment
    while (j < n)                        // n_pares*n*n EO, comparison
    {
      if (y[u][i] == 0 && y[u][j] == 0)  // n_pares*n*n EO each: comparison y[u][i]==0, logical AND (&&), comparison y[u][j]==0
        t = 1;
      else if (y[u][i] == 0 && y[u][j] == 1)   // (a)
        t = 0;
      else if (y[u][i] == 1 && y[u][j] == 0)   // (b)
        t = 2;
      else
        t = 1;                           // an assignment to t always occurs: n_pares*n*n EO; branches (a) and (b) have the same probability of being executed: n_pares*n*(n/2) EO
      if (u == 0)                        // n_pares*n*n EO, comparison
        Vy[i][j] = t;                    // executed only for the first pattern: n*n EO
      else if (Vy[i][j] < t)             // (n_pares - 1)*n*n EO, comparison
        Vy[i][j] = t;                    // executed with probability one half: n_pares*n*(n/2) EO
      j++;                               // n_pares*n*n EO, increment
    }
    i++;                                 // n_pares*n EO, increment
  }
  u++;                                   // n_pares EO, increment
}
Summing these counts, the total number of EOs is a polynomial in n_pares and n whose dominant term is proportional to n_pares·n^2. From this total, n_pares is fixed with the value 50, resulting in a function f(n) that depends only on the size of the patterns and whose dominant term is proportional to n^2.
In order to analyze the feasibility of the algorithm we need to understand how fast this function grows as the value of n rises. Therefore, the big-O notation [41], shown below, will be used.
Let f and g be functions from a set of integers or real numbers to the set of real numbers. It is said that f(x) is O(g(x)) if there exist two constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.
The number of elemental operations obtained from our algorithm is a polynomial f(n) of degree 2 in n. A function g(x) and constants C and k must be found such that the inequality |f(n)| ≤ C|g(n)| holds for n > k. We propose g(n) = n^2. Then, taking C larger than the sum of the absolute values of the coefficients of f, and k = 1, we have that |f(n)| ≤ Cn^2 for all n > k; therefore, the complexity of the AlphaBeta BAM algorithm is O(n^2).
3. Experiments and Results
The database used in this work to analyze the performance of AlphaBeta BAM as a classifier was proposed by Haberman and is available at [42]. This database has 306 instances with 3 attributes: (1) age of the patient at the time of operation, (2) patient’s year of operation, and (3) number of positive axillary nodes detected. The database also has a survival status (class attribute): (1) the patient survived 5 years or longer, and (2) the patient died within 5 years.
The number of instances was reduced to 287 because some records appeared duplicated or, in some cases, were associated with the same class. Of the 287 records, 209 belonged to class 1 and the remaining 78 belonged to class 2.
The AlphaBeta BAM was implemented on a Sony VAIO laptop with a Centrino Duo processor, and the programming language was Visual C++ 6.0.
The leave-one-out method [43] was used to carry out the performance analysis of AlphaBeta BAM classification. This method operates as follows: one sample is removed from the total set of samples, and the remaining 286 samples are used as the fundamental set to create the BAM. Once the AlphaBeta BAM had learned, we proceeded to classify the 286 samples together with the removed sample; that is, we presented to the BAM every sample belonging to the fundamental set as well as the removed sample.
The process was repeated 287 times, corresponding to the number of records. The AlphaBeta BAM behaved as follows: in 278 trials it classified the excluded sample perfectly, and in the 9 remaining trials it did not classify it correctly. It must be emphasized that incorrect classification occurred only for the excluded sample, because for every sample belonging to the fundamental set the AlphaBeta BAM shows perfect recall. Therefore, in 278 trials the classification percentage was 100%, and in the remaining trials it was 99.65%. Averaging over the 287 trials, the AlphaBeta BAM classification was 99.98%.
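The leave-one-out protocol just described can be sketched generically (Python; a 1-nearest-neighbour stand-in plays the role of the classifier here, since the article's classifier is the AlphaBeta BAM, and all names are ours):

```python
def loo_average_accuracy(samples, labels, train, classify):
    """For each index i: train without sample i, then classify every sample
    (the fundamental set plus the excluded one) and record the fold accuracy;
    return the average accuracy over all folds."""
    accs = []
    for i in range(len(samples)):
        model = train(samples[:i] + samples[i + 1:], labels[:i] + labels[i + 1:])
        hits = sum(classify(model, x) == y for x, y in zip(samples, labels))
        accs.append(hits / len(samples))
    return sum(accs) / len(accs)

def train_1nn(xs, ys):
    # Stand-in "training": just memorize the pairs.
    return list(zip(xs, ys))

def classify_1nn(model, x):
    # Label of the nearest stored sample (squared Euclidean distance).
    return min(model, key=lambda it: sum((a - b) ** 2 for a, b in zip(it[0], x)))[1]
```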
Table 3 compares the results of several classification methods: SVM-Bagging, model averaging, the in-group/out-group method, the fuzzy ARTMAP neural network, and AlphaBeta BAM. The methods presented in [32, 33] do not report classification results; they only indicate that their algorithms are used to accelerate performance.
AlphaBeta BAM exceeds the other methods by 9.98%, and none of those algorithms uses an associative model.
We must mention that the Haberman database has records that are very similar to each other, a feature that could hinder the performance of some BAMs due to their restrictions on data characteristics, for example, Hamming distance or orthogonality. However, AlphaBeta BAM does not present these kinds of data limitations, as the obtained results demonstrate.
4. Conclusions
The use of bidirectional associative memories as classifiers using Haberman database has not been reported before. In this work we use the model of AlphaBeta BAM to classify cancer recurrence.
Our model presents perfect recall of the fundamental set, in contrast with Kosko-based models or the morphological BAM; this feature makes AlphaBeta BAM a suitable tool for pattern recognition and, particularly, for classification.
We compared our results with the following methods: SVM-Bagging, model averaging, the in-group/out-group method, and the fuzzy ARTMAP neural network, and we found that AlphaBeta BAM is the best classifier on the Haberman database: its classification percentage of 99.98% exceeds the other methods by 9.98%.
These results show that AlphaBeta BAM not only has perfect recall but can also correctly classify most of the records not belonging to the training patterns.
Even though the patterns are very similar to each other, AlphaBeta BAM was able to recall most of the data, so it performed as a very good classifier. Most Kosko-based BAMs show low recall unless the patterns satisfy conditions such as Hamming distance, orthogonality, or linear independence; however, AlphaBeta BAM imposes no restriction on the nature of the data.
The next step in our research is to test AlphaBeta BAM as a classifier on other databases, such as Breast Cancer (Wisconsin) and Breast Cancer (Yugoslavia), and on standard databases such as Iris Plant or MNIST, so that we can assess the general performance of our model. However, we must take into account the “no free lunch’’ theorem, which asserts that an algorithm that is the best on one type of problem can be the worst on another. In our case, our results show that AlphaBeta BAM is the best classifier when the Haberman database is used.
Acknowledgments
The authors would like to thank the Instituto Politécnico Nacional (COFAA and SIP) and SNI for their financial support of this work.
References
[1] H. B. Burke, P. H. Goodman, D. B. Rosen et al., “Artificial neural networks improve the accuracy of cancer survival prediction,” Cancer, vol. 79, no. 4, pp. 857–862, 1997.
[2] P. Domingos, “Unifying instance-based and rule-based induction,” Machine Learning, vol. 24, no. 2, pp. 141–168, 1996.
[3] E. Boros, P. Hammer, and T. Ibaraki, “An implementation of logical analysis of data,” IEEE Transactions on Knowledge and Data Engineering, vol. 12, no. 2, pp. 292–306, 2000.
[4] W. N. Street and Y. Kim, “A streaming ensemble algorithm (SEA) for large-scale classification,” in Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '01), pp. 377–382, ACM, San Francisco, Calif, USA, August 2001.
[5] Y. Wang and I. H. Witten, “Modeling for optimal probability prediction,” in Proceedings of the 19th International Conference on Machine Learning (ICML '02), pp. 650–657, July 2002.
[6] K. Huang, H. Yang, and I. King, “Biased minimax probability machine for medical diagnosis,” in Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics (AIM '04), Fort Lauderdale, Fla, USA, January 2004.
[7] C.-L. Huang, H.-C. Liao, and M.-C. Chen, “Prediction model building and feature selection with support vector machines in breast cancer diagnosis,” Expert Systems with Applications, vol. 34, no. 1, pp. 578–587, 2008.
[8] O. López-Ríos, E. C. Lazcano-Ponce, V. Tovar-Guzmán, and M. Hernández-Avila, “La epidemia de cáncer de mama en México. ¿Consecuencia de la transición demográfica?” Salud Pública de México, vol. 39, no. 4, pp. 259–265, 1997.
[9] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
[10] G. X. Ritter and P. Sussner, “An introduction to morphological neural networks,” in Proceedings of the 13th International Conference on Pattern Recognition, vol. 4, pp. 709–717, 1996.
[11] F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychological Review, vol. 65, no. 6, pp. 386–408, 1958.
[12] B. Widrow and M. A. Lehr, “30 years of adaptive neural networks: perceptron, madaline, and backpropagation,” Proceedings of the IEEE, vol. 78, no. 9, pp. 1415–1442, 1990.
[13] P. J. Werbos, “Backpropagation through time: what it does and how to do it,” Proceedings of the IEEE, vol. 78, no. 10, pp. 1550–1560, 1990.
[14] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982.
[15] B. Kosko, “Bidirectional associative memories,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
[16] Y. Jeng, C. C. Yeh, and T. D. Chiueh, “Exponential bidirectional associative memories,” Electronics Letters, vol. 26, no. 11, pp. 717–718, 1990.
[17] W.-J. Wang and D.-L. Lee, “Modified exponential bidirectional associative memories,” Electronics Letters, vol. 28, no. 9, pp. 888–890, 1992.
[18] S. Chen, H. Gao, and W. Yan, “Improved exponential bidirectional associative memory,” Electronics Letters, vol. 33, no. 3, pp. 223–224, 1997.
[19] Y. F. Wang, J. B. Cruz Jr., and J. H. Mulligan Jr., “Two coding strategies for bidirectional associative memory,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 81–92, 1990.
[20] Y. F. Wang, J. B. Cruz Jr., and J. H. Mulligan Jr., “Guaranteed recall of all training pairs for bidirectional associative memory,” IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 559–567, 1991.
[21] R. Perfetti, “Optimal gradient descent learning for bidirectional associative memories,” Electronics Letters, vol. 29, no. 17, pp. 1556–1557, 1993.
[22] G. Zheng, S. N. Givigi, and W. Zheng, “A new strategy for designing bidirectional associative memories,” in Advances in Neural Networks, vol. 3496 of Lecture Notes in Computer Science, pp. 398–403, Springer, Berlin, Germany, 2005.
[23] D. Shen and J. B. Cruz Jr., “Encoding strategy for maximum noise tolerance bidirectional associative memory,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 293–300, 2005.
[24] S. Arik, “Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 580–586, 2005.
[25] J. H. Park, “Robust stability of bidirectional associative memory neural networks with time delays,” Physics Letters A, vol. 349, no. 6, pp. 494–499, 2006.
[26] G. X. Ritter, J. L. Diaz-de-Leon, and P. Sussner, “Morphological bidirectional associative memories,” Neural Networks, vol. 12, no. 6, pp. 851–867, 1999.
[27] Y. Wu and D. A. Pados, “A feedforward bidirectional associative memory,” IEEE Transactions on Neural Networks, vol. 11, no. 4, pp. 859–866, 2000.
[28] C. Yáñez-Márquez, Associative Memories Based on Order Relations and Binary Operators, Ph.D. thesis, Center for Computing Research, Mexico City, Mexico, 2002.
[29] M. E. Acevedo-Mosqueda, C. Yáñez-Márquez, and I. López-Yáñez, “Alpha-beta bidirectional associative memories: theory and applications,” Neural Processing Letters, vol. 26, no. 1, pp. 1–40, 2007.
[30] M. E. Acevedo-Mosqueda, C. Yáñez-Márquez, and I. López-Yáñez, “Alpha-beta bidirectional associative memories based translator,” International Journal of Computer Science and Network Security, vol. 6, no. 5A, pp. 190–194, 2006.
[31] M. E. Acevedo-Mosqueda, C. Yáñez-Márquez, and I. López-Yáñez, “Alpha-beta bidirectional associative memories,” International Journal of Computational Intelligence Research, vol. 3, no. 1, pp. 105–110, 2007.
[32] D. DeCoste, “Anytime query-tuned kernel machines via Cholesky factorization,” in Proceedings of the SIAM International Conference on Data Mining (SDM '03), 2003.
[33] D. DeCoste, “Anytime interval-valued outputs for kernel machines: fast support vector machine classification via distance geometry,” in Proceedings of the International Conference on Machine Learning (ICML '02), 2002.
[34] Y. Zhang and W. N. Street, “Bagging with adaptive costs,” in Proceedings of the 5th IEEE International Conference on Data Mining (ICDM '05), pp. 825–828, Houston, Tex, USA, November 2005.
[35] D. Dash and G. F. Cooper, Model-Averaging with Discrete Bayesian Network Classifiers, Cambridge, UK.
[36] A. Thammano and J. Moolwong, “Classification algorithm based on human social behavior,” in Proceedings of the 7th IEEE International Conference on Computer and Information Technology (CIT '07), pp. 105–109, Aizu-Wakamatsu, Japan, October 2007.
[37] G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen, “Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps,” IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 698–713, 1992.
[38] T. Kohonen, “Correlation matrix memories,” IEEE Transactions on Computers, vol. 21, no. 4, pp. 353–359, 1972.
[39] G. X. Ritter, P. Sussner, and J. L. Diaz-de-Leon, “Morphological associative memories,” IEEE Transactions on Neural Networks, vol. 9, pp. 281–293, 1998.
[40] C. Yáñez-Márquez and J. L. Díaz de León, “Memorias asociativas basadas en relaciones de orden y operaciones binarias,” Computación y Sistemas, vol. 6, no. 4, pp. 300–311, 2003.
[41] K. Rosen, Discrete Mathematics and Its Applications, McGraw-Hill, United States, 1999.
[42] A. Asuncion and D. J. Newman, “UCI machine learning repository,” University of California, School of Information and Computer Science, Irvine, Calif, USA, 2007, http://archive.ics.uci.edu/ml.
[43] A. R. Webb, Statistical Pattern Recognition, John Wiley & Sons, West Sussex, UK, 2002.
Copyright
Copyright © 2009 María Elena Acevedo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.