Mathematical Problems in Engineering

Volume 2010, Article ID 356029, 27 pages

http://dx.doi.org/10.1155/2010/356029

## Associative Models for Storing and Retrieving Concept Lattices

^{1}Department of Communications and Electronic Engineering, Superior School of Mechanical and Electrical Engineering, National Polytechnic Institute, Avenue IPN s/n, Col. Lindavista, C.P. 07738, Mexico City, Mexico

^{2}Artificial Intelligence Laboratory, Computation Research Center, National Polytechnic Institute, Avenue Juan de Dios Bátiz s/n, C.P. 07738, Mexico City, Mexico

Received 17 March 2010; Accepted 21 June 2010

Academic Editor: Wei-Chiang Hong

Copyright © 2010 María Elena Acevedo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Alpha-Beta bidirectional associative memories are implemented for storing concept lattices. We use Lindig's algorithm to construct the concept lattice of a particular context; this structure is stored into an associative memory much as a human being does, namely, by associating patterns. The bidirectionality and perfect recall of the Alpha-Beta associative model make it a great tool for storing a concept lattice. In the learning phase, objects and attributes obtained from Lindig's algorithm are associated by an Alpha-Beta bidirectional associative memory; in this phase the data is stored. In the recalling phase, the associative model allows retrieval of objects from attributes or vice versa. Our model assures the recall of every learned concept.

#### 1. Introduction

Concept Lattices is the common name for a specialized form of Hasse diagrams [1] used in conceptual data processing. Concept lattices are a principled way of representing and visualizing the structure of symbolic data that emerged from Rudolf Wille's efforts to restructure lattice and order theory in the 1980s. Conceptual data processing, also known as Formal Concept Analysis, has become a standard technique in data and knowledge processing that has given rise to applications in data visualization, data mining, information retrieval (using ontologies), and knowledge management. Organizing discovered concepts in the form of a lattice structure has many advantages from the perspective of knowledge discovery: it facilitates insights into dependencies among the different concepts mined from a dataset. Lattices of concepts have been implemented with a number of different algorithms [2–7]. Any of them can generate a very large number of concepts; therefore, a suitable method is required for efficient storage and retrieval of parts of the lattice. The task of efficiently organizing and retrieving the various nodes of a lattice is the focus of this work. A concept is a pair that consists of a set of objects and the particular set of attribute values shared by those objects. From an initial table, with rows representing the objects and columns representing the attributes, a concept lattice can be obtained. From this structure, we can retrieve an object from its attributes or vice versa, and these pairs form a concept.

The main goal of an associative memory is to associate pairs of patterns so that one pattern can be recalled by presenting its corresponding pattern; the recall is done in one direction only. In the particular case of bidirectional associative memories (BAM), we can recall either of the two patterns belonging to a pair by presenting just one of them; therefore, recall works in both directions. This behavior makes the BAM a suitable tool for storing and retrieving the concepts which form a particular concept lattice. The first step in achieving this task is to apply any of the existing algorithms to obtain the concept lattice; in this work, we use Lindig's algorithm [5]. We then store each node (concept) by associating the objects and attributes forming that concept. Once we have stored all concepts, we are able to retrieve them by presenting an object or an attribute. The model of BAM used here is the Alpha-Beta bidirectional associative memory [8]. The main reason for using this model is that it presents perfect recall of the training set; this means that it can recall every pair of patterns that it associated, regardless of the size or number of the patterns. This advantage is not shared by other BAM models, which present stability and convergence problems or limit their use to a particular number of patterns or to patterns of a particular nature, characterized, for example, by Hamming distance or linear dependency [9–19].

In Section 2, we present a brief discussion of Formal Concept Analysis. In Section 3, we introduce the basic concepts of associative models, in particular the Alpha-Beta model, because it is the basis of the Alpha-Beta BAM. Then, we present the theoretical foundations of our associative model, which assure perfect recall of the training set of patterns with no limits on the number or nature of the patterns. We describe the software that implements our algorithm in Section 4 and show an example.

#### 2. Formal Concept Analysis

Formal Concept Analysis (FCA) was first proposed by Wille in 1982 [20] as a mathematical framework for performing data analysis. It provides a conceptual analytical tool for investigating and processing given information explicitly [21]. Such data is structured into units, which are formal abstractions of “*concepts*” of human thought, allowing meaningful and comprehensible interpretation. FCA models the world as being composed of *objects* and *attributes*. It is assumed that an incidence relation connects objects to attributes. The choice of what is an object and what is an attribute depends on the domain in which FCA is applied. Information about a domain is captured in a “formal context”. A formal context is merely a formalization that encodes only a small portion of what is usually referred to as a “*context*”. The following definition is crucial to the theory of FCA.

*Definition 2.1. *A formal context is a triplet (G, M, I) consisting of two sets, G (the set of objects) and M (the set of attributes), and a relation I ⊆ G × M between G and M.

*Definition 2.2. *A formal concept in a formal context (G, M, I) is a pair of sets (A, B), with A ⊆ G and B ⊆ M, such that A′ = B and B′ = A (completeness constraint), where A′ = {m ∈ M | (g, m) ∈ I for all g ∈ A} (i.e., the set of attributes common to all the objects in A) and B′ = {g ∈ G | (g, m) ∈ I for all m ∈ B} (i.e., the set of objects that have all attributes in B). By (g, m) ∈ I we denote the fact that object g has attribute m.

The set of all concepts of a context (G, M, I) is denoted by B(G, M, I). It consists of all pairs (A, B) such that A′ = B and B′ = A, where A ⊆ G and B ⊆ M.
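As a concrete illustration of the two derivation operators in Definition 2.2, consider the following sketch; the context (object and attribute names included) is a hypothetical example of ours, not taken from the paper.

```python
# A tiny hypothetical formal context: objects x attributes,
# with the incidence relation I given as a set of pairs.
objects = ["o1", "o2", "o3"]
attributes = ["a", "b", "c"]
I = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def common_attributes(A):
    """A' : attributes shared by every object in A."""
    return {m for m in attributes if all((g, m) in I for g in A)}

def common_objects(B):
    """B' : objects having every attribute in B."""
    return {g for g in objects if all((g, m) in I for m in B)}

# (A, B) is a formal concept exactly when A' == B and B' == A.
A = {"o1", "o2", "o3"}
B = common_attributes(A)   # only "b" is shared by all three objects
print(B, common_objects(B))  # B' is again all three objects: a concept
```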

*Definition 2.3. *Specificity-generality order relationship. If (A₁, B₁) and (A₂, B₂) are concepts of a context, then (A₁, B₁) is called a subconcept of (A₂, B₂) if A₁ ⊆ A₂ (or, equivalently, B₂ ⊆ B₁). This sub-superconcept relation is written as (A₁, B₁) ≤ (A₂, B₂). According to this definition, a subconcept always contains fewer objects and more attributes than any of its superconcepts.

##### 2.1. Concept Lattice

The set of all concepts of the context (denoted by B(G, M, I)), when ordered with the order relation ≤ (a subsumption relation) defined above, forms a *concept lattice* of the context.

A lattice is an ordered set *V* with an order relation in which, for any two given elements x and y, the *supremum* x ∨ y and the *infimum* x ∧ y always exist in *V*. Furthermore, such a lattice is called a *complete lattice* if supremum and infimum elements exist for any subset *X* of *V*. The fundamental theorem of FCA states that the set of formal concepts of a formal context forms a complete lattice.
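The fundamental theorem can be checked by brute force on a small context: closing every subset of objects yields exactly the set of formal concepts, which, ordered by the subsumption relation, are the nodes of the complete lattice. The context below is a hypothetical example of ours.

```python
from itertools import combinations

objects = ["o1", "o2", "o3"]
attributes = ["a", "b", "c"]
I = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def up(A):    # A' : attributes common to all objects in A
    return frozenset(m for m in attributes if all((g, m) in I for g in A))

def down(B):  # B' : objects having all attributes in B
    return frozenset(g for g in objects if all((g, m) in I for m in B))

# Naive enumeration: close every subset of objects; duplicates collapse,
# so the result is exactly the set of formal concepts B(G, M, I).
concepts = set()
for r in range(len(objects) + 1):
    for A in combinations(objects, r):
        B = up(frozenset(A))
        concepts.add((down(B), B))

print(len(concepts))  # number of nodes in the concept lattice
```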

This complete lattice, which is composed of *formal concepts*, is called a *concept lattice*.

A concept lattice can be visualized as a graph with nodes and edges/links. The concepts at the nodes from which two or more lines run up are called *meet* concepts (i.e., nodes with more than one parent), and the concepts at the nodes from which two or more lines run down are called *join* concepts (i.e., nodes with more than one child).

A *join concept* groups objects which share the same attributes and a *meet concept* separates out objects that have combined attributes from different parents (groups of objects). Each of these join and meet concepts creates a new sub- or super-category or class of a concept.

#### 3. Alpha-Beta Bidirectional Associative Memories

In this section, the Alpha-Beta bidirectional associative memory is presented. However, since it is based on the Alpha-Beta autoassociative memories, a summary of that model will be given before presenting our model of BAM.

##### 3.1. Basic Concepts

Basic concepts about associative memories were established three decades ago in [22–24]; nonetheless, here we use the concepts, results, and notation introduced in [25]. An associative memory **M** is a system that relates input patterns and output patterns as follows: x → **M** → y, with x and y the input and output pattern vectors, respectively. Each input vector forms an association with a corresponding output vector. For k a positive integer, the corresponding association will be denoted as (x^k, y^k). The associative memory **M** is represented by a matrix whose ijth component is m_ij. Memory **M** is generated from an a priori finite set of known associations, known as the fundamental set of associations.

If μ is an index, the fundamental set is represented as {(x^μ, y^μ) | μ = 1, 2, …, p}, with *p* being the cardinality of the set. The patterns that form the fundamental set are called fundamental patterns. If it holds that x^μ = y^μ for all μ ∈ {1, 2, …, p}, the memory is *autoassociative*; otherwise it is *heteroassociative*, in which case it is possible to establish that ∃μ ∈ {1, 2, …, p} for which x^μ ≠ y^μ. A distorted version of a pattern x^k to be recovered will be denoted as x̃^k. If, when feeding a distorted version x̃^ω, with ω ∈ {1, 2, …, p}, to an associative memory **M**, it happens that the output corresponds exactly to the associated pattern y^ω, we say that recall is perfect.
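A toy sketch of these notions (our own illustration, not the paper's model): a fundamental set of p associations stored in a trivial lookup memory that, by construction, exhibits perfect recall of every fundamental pattern.

```python
# A hypothetical fundamental set of p = 2 associations (x^mu, y^mu).
# Since some x^mu != y^mu, this toy memory is heteroassociative.
fundamental_set = [([1, 0], [0, 1]), ([0, 1], [1, 1])]
memory = {tuple(x): y for x, y in fundamental_set}

def recall(x):
    """Return the output pattern associated with input pattern x."""
    return memory.get(tuple(x))

# Recall is "perfect" when every stored x^mu retrieves exactly its y^mu.
perfect = all(recall(x) == y for x, y in fundamental_set)
print(perfect)  # this lookup memory trivially has perfect recall
```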

##### 3.2. Alpha-Beta Associative Memories

Among the variety of associative memory models described in the scientific literature, two models deserve emphasis because of their relevance: the morphological associative memories introduced by Ritter et al. [18], and the Alpha-Beta associative memories. Because of their excellent characteristics, which make them superior in many aspects to other associative memory models, morphological associative memories served as the starting point for the creation and development of the Alpha-Beta associative memories.

The Alpha-Beta associative memories [25] are of two kinds and are able to operate in two different modes. The operator α is useful in the learning phase, and the operator β is the basis of the pattern recall phase. The heart of the mathematical tools used in the Alpha-Beta model is two binary operators designed specifically for this model. These operators are defined as follows: first, we have the sets A = {0, 1} and B = {0, 1, 2}; then the operators α : A × A → B and β : B × A → A are defined in Tables 1 and 2, respectively.

The sets *A* and *B*, the α and β operators, along with the usual ∧ (minimum) and ∨ (maximum) operators, form the algebraic system (A, B, α, β, ∧, ∨) which is the mathematical basis for the Alpha-Beta associative memories. Some characteristics of Alpha-Beta autoassociative memories are as follows. (1) The fundamental set takes the form {(x^μ, x^μ) | μ = 1, 2, …, p}. (2) Both input and output fundamental patterns are of the same dimension, denoted by *n*. (3) The memory is a square matrix, for both modes, **V** and **Λ**. If v_ij and λ_ij denote the ijth components of **V** and **Λ**, then

v_ij = ∨_{μ=1}^{p} α(x^μ_i, x^μ_j),  λ_ij = ∧_{μ=1}^{p} α(x^μ_i, x^μ_j),

and, according to the definition of α, we have that v_ij ∈ B and λ_ij ∈ B, for all i ∈ {1, …, n} and for all j ∈ {1, …, n}.

In the recall phase, when a pattern x^ω is presented to memories **V** and **Λ**, the ith components of the recalled patterns are

(V Δ_β x^ω)_i = ∧_{j=1}^{n} β(v_ij, x^ω_j),  (Λ ∇_β x^ω)_i = ∨_{j=1}^{n} β(λ_ij, x^ω_j).
The next two theorems show that the Alpha-Beta autoassociative memories max and min are immune to a certain amount of additive and subtractive noise, respectively. These theorems keep the original numbering presented in [25] and are an important part of the mathematical foundations of Alpha-Beta BAM theory.
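As a hedged sketch of the learning and recall phases described above, the following code uses the α and β tables as they are commonly given for the Alpha-Beta model (our transcription, since Tables 1 and 2 are not reproduced in this text); the patterns are hypothetical.

```python
# alpha : A x A -> B and beta : B x A -> A, with A = {0,1}, B = {0,1,2},
# as commonly defined for the Alpha-Beta associative model.
ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}
BETA = {(0, 0): 0, (0, 1): 0, (1, 0): 0,
        (1, 1): 1, (2, 0): 1, (2, 1): 1}

def learn(patterns, agg):
    """Learning: m_ij = agg over mu of alpha(x_i, x_j); agg is max (V)
    or min (Lambda)."""
    n = len(patterns[0])
    return [[agg(ALPHA[(x[i], x[j])] for x in patterns)
             for j in range(n)] for i in range(n)]

def recall(M, x, agg):
    """Recall: ith component is agg over j of beta(m_ij, x_j); the max
    memory recalls with min, and the min memory with max."""
    n = len(x)
    return [agg(BETA[(M[i][j], x[j])] for j in range(n)) for i in range(n)]

patterns = [[1, 0, 1], [0, 1, 1]]      # hypothetical fundamental set
V = learn(patterns, max)               # autoassociative memory of type max
print(recall(V, [1, 0, 1], min))       # -> [1, 0, 1], perfect recall
```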

Theorem 3.1. *Let {(x^μ, x^μ) | μ = 1, 2, …, p} be the fundamental set of an autoassociative Alpha-Beta memory of type ∨ represented by V, and let x̃ ∈ A^{n} be a pattern altered with additive noise with respect to some fundamental pattern x^ω, with ω ∈ {1, 2, …, p}. If x̃ is presented to V as input, and also for every i ∈ {1, …, n} it holds that ∃j = j₀ ∈ {1, …, n}, which is dependent on ω and i, such that v_{ij₀} ≤ α(x^ω_i, x̃_{j₀}), then recall is perfect; that is to say, V Δ_β x̃ = x^ω.*

Theorem 3.2. *Let {(x^μ, x^μ) | μ = 1, 2, …, p} be the fundamental set of an autoassociative Alpha-Beta memory of type ∧ represented by Λ, and let x̃ ∈ A^{n} be a pattern altered with subtractive noise with respect to some fundamental pattern x^ω, with ω ∈ {1, 2, …, p}. If x̃ is presented to memory Λ as input, and also for every i ∈ {1, …, n} it holds that ∃j = j₀ ∈ {1, …, n}, which is dependent on ω and i, such that λ_{ij₀} ≥ α(x^ω_i, x̃_{j₀}), then recall is perfect; that is to say, Λ ∇_β x̃ = x^ω.*

With these bases, we proceed to describe the Alpha-Beta BAM model.

##### 3.3. Alpha-Beta Bidirectional Associative Memories

Usually, any bidirectional associative memory model appearing in the current scientific literature follows the scheme shown in Figure 1.

A BAM is a “black box” operating in the following way: given a pattern x, the associated pattern y is obtained, and given the pattern y, the associated pattern x is recalled. Besides, if we assume that x̃ and ỹ are noisy versions of x and y, respectively, it is expected that the BAM can recover the corresponding noise-free patterns x and y.

The first bidirectional associative memory (BAM), introduced by Kosko [26], was the basis of many models presented later. Some of these models substituted the learning rule with an exponential rule [9–11]; others used the method of multiple training and dummy addition in order to reach a greater number of stable states [12], trying to eliminate spurious states. With the same purpose, linear programming techniques [13] and the descending gradient method [14, 15] have been used, besides genetic algorithms [16] and BAMs with delays [17, 27]. Other models of noniterative bidirectional associative memories exist, such as the morphological BAM [18] and the feedforward BAM [19]. All these models arose to solve the problem of low pattern recall capacity shown by the BAM of Kosko; however, none has been able to recall all the trained patterns. Moreover, these models demand the fulfillment of specific conditions, such as a certain Hamming distance between patterns, solvability by linear programming, or orthogonality between patterns, among others.

The model of bidirectional associative memory presented in this paper is the Alpha-Beta BAM [28], based on the Alpha-Beta associative memories [25]; it is not an iterative process and does not present stability problems. Pattern recall capacity of the Alpha-Beta BAM is maximal, being 2^min(n,m), where n and m are the input and output pattern dimensions, respectively. Also, it always shows perfect pattern recall without imposing any condition.

The model used in this paper has been named Alpha-Beta BAM, since the Alpha-Beta associative memories, both max and min, play a central role in the model design. However, before going into detail about the processing in an Alpha-Beta BAM, we will present the following definitions.

In this work we will assume that the Alpha-Beta associative memories have a fundamental set denoted by {(x^μ, y^μ) | μ = 1, 2, …, p}, with x^μ ∈ A^n and y^μ ∈ A^m, where A = {0, 1}, n ∈ Z⁺, p ∈ Z⁺, m ∈ Z⁺, and 1 < p ≤ min(2^n, 2^m). Also, it holds that all input patterns are different; that is, x^μ = x^ξ if and only if μ = ξ. If for all μ ∈ {1, 2, …, p} it holds that x^μ = y^μ, the Alpha-Beta memory will be *autoassociative*; if, on the contrary, the former affirmation is negative, that is, ∃μ ∈ {1, 2, …, p} for which it holds that x^μ ≠ y^μ, then the Alpha-Beta memory will be *heteroassociative*.

*Definition 3.3 (One-Hot). *Let the set A be A = {0, 1}, and let p ∈ Z⁺, p > 1, and k ∈ Z⁺ be such that 1 ≤ k ≤ p. The kth one-hot vector of p bits is defined as the vector h^k ∈ A^p for which it holds that the kth component is h^k_k = 1 and the rest of the components are h^k_j = 0, for all j ≠ k, 1 ≤ j ≤ p.

*Remark 3.4. *In this definition, the value p = 1 is excluded since a one-hot vector of dimension 1, given its essence, has no reason to be.

*Definition 3.5 (Zero-Hot). *Let the set A be A = {0, 1}, and let p ∈ Z⁺, p > 1, and k ∈ Z⁺ be such that 1 ≤ k ≤ p. The kth zero-hot vector of p bits is defined as the vector h̄^k ∈ A^p for which it holds that the kth component is h̄^k_k = 0 and the rest of the components are h̄^k_j = 1, for all j ≠ k, 1 ≤ j ≤ p.

*Remark 3.6. *In this definition, the value p = 1 is excluded since a zero-hot vector of dimension 1, given its essence, has no reason to be.

*Definition 3.7 (Expansion vectorial transform). *Let the set A be A = {0, 1}, and n ∈ Z⁺, m ∈ Z⁺. Given two arbitrary vectors x ∈ A^n and e ∈ A^m, the expansion vectorial transform of order m, τ^e : A^n → A^(n+m), is defined as τ^e(x, e) = X ∈ A^(n+m), a vector whose components are X_i = x_i for 1 ≤ i ≤ n and X_i = e_(i−n) for n + 1 ≤ i ≤ n + m.

*Definition 3.8 (Contraction vectorial transform). *Let the set A be A = {0, 1}, and n ∈ Z⁺, m ∈ Z⁺ such that 1 ≤ n < m. Given one arbitrary vector X ∈ A^m, the contraction vectorial transform of order n, τ^c : A^m → A^(m−n), is defined as τ^c(X, n) = c ∈ A^(m−n), a vector whose components are c_i = X_(i+n) for 1 ≤ i ≤ m − n.
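Definitions 3.3–3.8 can be sketched in a few lines; the code below is our own illustration of one-hot/zero-hot vectors and the expansion and contraction transforms, under the assumption that contraction recovers exactly the components appended by expansion.

```python
def one_hot(k, p):
    """kth one-hot vector of p bits (1-indexed k): 1 at k, 0 elsewhere."""
    return [1 if i == k - 1 else 0 for i in range(p)]

def zero_hot(k, p):
    """kth zero-hot vector of p bits: 0 at k, 1 elsewhere."""
    return [0 if i == k - 1 else 1 for i in range(p)]

def expand(x, e):
    """Expansion vectorial transform: concatenate pattern x with vector e."""
    return x + e

def contract(X, n):
    """Contraction vectorial transform of order n: keep the components
    after position n (here, the part appended by expand)."""
    return X[n:]

x = [1, 0, 1]
X = expand(x, one_hot(2, 4))   # [1, 0, 1, 0, 1, 0, 0]
print(contract(X, 3))          # recovers the one-hot part: [0, 1, 0, 0]
```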

In both directions, the model is made up of two stages, as shown in Figure 2.

For simplicity, the process necessary in one direction will be described first, in order to later present the complementary direction, which gives bidirectionality to the model (see Figure 3).

The function of Stage 2 is to offer a y^k as output given a one-hot vector h^k as input.

Now we assume that the input to Stage 2 is one element of a set of orthonormal vectors. Recall that the *Linear Associator* has perfect recall when it works with orthonormal vectors. In this work, we use a variation of the *Linear Associator* in order to obtain y^k, starting from a *one-hot* vector h^k set to 1 in its kth coordinate.

For the construction of the modified Linear Associator, its learning phase is skipped and the matrix representing the memory is built directly: each column of this matrix corresponds to one output pattern y^μ. In this way, when the matrix is operated with a one-hot vector h^k, the corresponding y^k will always be recalled.
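A minimal sketch of this modified Linear Associator, with hypothetical output patterns: each column of the matrix is one y^k, so multiplying by the kth one-hot vector recalls y^k exactly.

```python
# Hypothetical output patterns y^1, y^2, y^3 (dimension n = 3, p = 3).
ys = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
p, n = len(ys), len(ys[0])

# Build the memory matrix directly: column k holds pattern y^k.
M = [[ys[k][i] for k in range(p)] for i in range(n)]

def stage2(h):
    """Matrix-vector product; with h the kth one-hot vector, the sum
    selects column k, so the output is exactly y^k."""
    return [sum(M[i][k] * h[k] for k in range(p)) for i in range(n)]

print(stage2([0, 1, 0]))   # -> [0, 1, 1], the second stored pattern
```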


###### 3.3.1. Theoretical Foundation of Stages 1 and 3

Below, 5 theorems and 9 lemmas are presented with their respective proofs, as well as an illustrative example of each one. This mathematical foundation is the basis for the steps required by the complete algorithm, which is presented in Section 3.3.2. The numbering of these theorems and lemmas corresponds to the numeration used in [23].

By convention, the symbol ∎ will be used to indicate the end of a proof.

Theorem 3.9. *Let {(x^μ, x^μ) | μ = 1, 2, …, p} be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by V, and let x̃ ∈ A^{n} be a pattern altered with additive noise with respect to some fundamental pattern x^ω, with ω ∈ {1, 2, …, p}. Let us assume that during the recalling phase, x̃ is presented to memory V as input, and let us consider an index i ∈ {1, …, n}. The ith component recalled is precisely x^ω_i if and only if it holds that ∃j = j₀ ∈ {1, …, n}, dependent on ω and i, such that v_{ij₀} ≤ α(x^ω_i, x̃_{j₀}).*

*Proof. *⇒) By hypothesis we assume that . By contradiction, let us now suppose it is false that such that . The former is equivalent to stating that for all , which is the same as saying that for all . When we take minimums at both sides of the inequality with respect to index , we have
and this means that , which contradicts the hypothesis.

⇐) Since the conditions of Theorem 3.1 hold for every , we have that ; that is, it holds that , for all . When we fix indexes and such that (which depends on *ω* and ), we obtain the desired result: .

Lemma 3.10. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for , and let be a version of a specific pattern , altered with additive noise, being the vector defined as . If during the recalling phase is presented to memory , then component will be recalled in a perfect manner; that is, .*

*Proof. *This proof will be done for two mutually exclusive cases.*Case 1. *Pattern has one component with value 0. This means that such that ; also, due to the way vector is built, it is clear that . Then , and since the maximum allowed value for a component of memory is 2, we have . According to Theorem 3.9, is perfectly recalled.*Case 2. *Pattern does not contain a component with value 0. That is, for all . This means that it is not possible to guarantee the existence of a value such that , and therefore Theorem 3.9 cannot be applied. However, we will show the impossibility of the contrary. The recalling phase of the autoassociative Alpha-Beta memory of type max, when having vector as input, takes the following form for the th recalled component:
Due to the way vector is built, besides , it is important to notice that for all , and from here we can establish that the following
is different from zero regardless of the value of . Since this holds for all , we can conclude the impossibility of
being zero. That is, .

Theorem 3.11. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for , and let be a pattern altered with additive noise with respect to some specific pattern , with being the vector defined as . Let us assume that during the recalling phase, is presented to memory as input, and the pattern is obtained. If when taking vector as argument, the contraction vectorial transform is done, the resulting vector has two mutually exclusive possibilities: such that , or is not a one-hot vector.*

*Proof. *From the definition of contraction vectorial transform, we have that for , and in particular, by making we have . However, by Lemma 3.10 , and since , the value is equal to the value of component . That is, . When considering that , vector **r** has two mutually exclusive possibilities: it can be that for all in which case , or happens that for which , in which case it is not possible that is a one-hot vector, given Definition 3.3.

Theorem 3.12. *Let {(x^μ, x^μ) | μ = 1, 2, …, p} be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by Λ, and let x̃ ∈ A^{n} be a pattern altered with subtractive noise with respect to some fundamental pattern x^ω, with ω ∈ {1, 2, …, p}. Let us assume that during the recalling phase, x̃ is presented to memory Λ as input, and consider an index i ∈ {1, …, n}. The ith recalled component is precisely x^ω_i if and only if it holds that ∃j = j₀ ∈ {1, …, n}, dependent on ω and i, such that λ_{ij₀} ≥ α(x^ω_i, x̃_{j₀}).*

*Proof. *⇒) By hypothesis, it is assumed that . By contradiction, let us now suppose it is false that such that . That is to say that for all , , which is in turn equivalent to for all , . When taking the maximums at both sides of the inequality, with respect to index , we have
and this means that , an affirmation which contradicts the hypothesis.

⇐) When the conditions of Theorem 3.2 [19] are met for every , we have . That is, it holds that for all . When indexes and are fixed such that and , depending on and , we obtain the desired result .

Lemma 3.13. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for , and let be a pattern altered with subtractive noise with respect to some specific pattern , being a vector whose components have values , and the vector defined as . If during the recalling phase, is presented to memory , then component is recalled in a perfect manner. That is, .*

*Proof. *This proof will be done for two mutually exclusive cases.*Case 1. *Pattern has one component with value 1. This means that such that . Also, due to the way vector is built, it is clear that . Because of this, , and since the minimum allowed value for a component of memory is 0, we have . According to Theorem 3.12, is perfectly recalled.*Case 2. *Pattern **G** has no component with value 1; that is, for all . This means that it is not possible to guarantee the existence of a value such that , and therefore Theorem 3.12 cannot be applied. However, let us show the impossibility of the contrary. The recalling phase of the autoassociative Alpha-Beta memory of type min, with vector as input, takes the following form for the th recalled component:
Due to the way vector is built, besides that , it is important to notice that for all , and from here we can state that
is different from 2 regardless of the value of . Taking into account that for all , we can conclude that it is impossible for
to be equal to 1. That is, .

Theorem 3.14. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for , and let be a pattern altered with subtractive noise with respect to some specific pattern , with being a vector whose components have values , and the vector defined as . Let us assume that during the recalling phase, is presented to memory as input, and the pattern is obtained as output. If when taking vector as argument, the contraction vectorial transform is done, the resulting vector has two mutually exclusive possibilities: such that , or is not a one-hot vector.*

*Proof. *From the definition of contraction vectorial transform, we have that for , and in particular, by making we have . However, by Lemma 3.13 , and since , the value is equal to the value of component . That is, . When considering that , vector **s** has two mutually exclusive possibilities: it can be that for all in which case ; or it holds that , for which , in which case it is not possible for **s** to be a zero-hot vector, given Definition 3.5.

Lemma 3.15. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for all . If t is an index such that , then for all .*

*Proof. *In order to establish that for all , given the definition of , it is enough to find, for each for all , an index for which in the expression that produces the th component of memory , which is . Due to the way each vector for is built, and given the domain of index , for each exists such that . This is why two useful values to determine the result are and , because . Then, , a value which is different from 0. That is, for all .

Lemma 3.16. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for , and let be an altered version, by additive noise, of a specific pattern , with being the vector defined as . Let us assume that during the recalling phase, is presented to memory as input. Given a fixed index such that , it holds that if and only if the following logic proposition is true: for all .*

*Proof. *Due to the way vectors and are built, we have that is the component with additive noise with respect to component .

⇒) There are two possible cases.*Case 1. *Pattern does not contain components with value 0. That is, . This means that the antecedent of proposition is false, and therefore, regardless of the truth value of consequence , the expression for all is true.*Case 2. *Pattern contains at least one component with value 0. That is, such that . By hypothesis, , which means that the condition for a perfect recall of is not met. In other words, according to Theorem 3.9, expression *¬*[ such that ] is true, which is equivalent to
In particular, for , and taking into account that , this inequality reduces to . That is, , and therefore the expression for all () is true. ⇐) Assuming the following expression is true for all (), there are two possible cases.*Case 1. *Pattern does not contain components with value 0. That is, for all . When considering that , according to the definition of *β*, it is enough to show that for all , which is guaranteed by Lemma 3.15. Then, it has been proven that .

*Case 2. *Pattern contains at least one component with value 0. That is, such that . By hypothesis we have that for all , and, in particular, for and , which means that .

Corollary 3.17. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for , and let be an altered version, by additive noise, of a specific pattern , with being the vector defined as . Let us assume that during the recalling phase, is presented to memory as input. Given a fixed index such that , it holds that if and only if the following logic proposition is true: for all , *

*Proof. *In general, given two logical propositions P and Q, the proposition (P if and only if Q) is equivalent to the proposition (¬P if and only if ¬Q). If P is identified with the equality and Q with the expression for all (), then by Lemma 3.16 the following proposition is true: ¬P if and only if ¬[for all ]. This expression transforms into the following equivalent propositions:

Lemma 3.18. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for all . If t is an index such that , then for all .*

*Proof. *In order to establish that for all , given the definition of , it is enough to find, for each , an index for which in the expression leading to obtaining the th component of memory , which is . In fact, due to the way each vector for is built, and given the domain of index , for each exists such that ; therefore two values useful to determine the result are and , because , then , a value different from 2. That is, for all .

Lemma 3.19. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for , and let be an altered version, by subtractive noise, of a specific pattern , with being a vector whose components have values , and the vector defined as . Let us assume that during the recalling phase, is presented to memory as input. Given a fixed index such that , it holds that , if and only if the following logical proposition is true for all .*

*Proof. *Due to the way vectors and are built, we have that is the component with subtractive noise with respect to component .

⇒) There are two possible cases.*Case 1. *Pattern does not contain components with value 1. That is, for all . This means that the antecedent of the logical proposition is false and therefore, regardless of the truth value of consequent , the expression for all () is true.*Case 2. *Pattern contains at least one component with value 1. That is, such that . By hypothesis, , which means that the perfect recall condition of is not met. In other words, according to Theorem 3.12, expression *¬*[ such that ] is true, which in turn is equivalent to
In particular, for and considering that , this inequality yields . That is, λ_{tr} = 0, and therefore the expression for all () is true. ⇐) Assuming the following expression to be true, for all (), there are two possible cases.

*Case 1. *Pattern **G** does not contain components with value 1. That is, for all . When considering that , according to the definition of *β*, it is enough to show that for all , , which is guaranteed by Lemma 3.18. Then, it is proven that .

*Case 2. *Pattern contains at least one component with value 1. That is, such that . By hypothesis we have that for all () and, in particular, for and , which means that .

Corollary 3.20. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for , and let be an altered version, by subtractive noise, of a specific pattern , with being a vector whose components have values , and the vector defined as . Let us assume that during the recalling phase, is presented to memory as input. Given a fixed index such that , it holds that if and only if the following logic proposition is true: .*

*Proof. *In general, given two logical propositions P and Q, the proposition (P if and only if Q) is equivalent to the proposition (¬P if and only if ¬Q). If P is identified with the equality and Q with the expression for all , then by Lemma 3.19 the following proposition is true: ¬P if and only if ¬Q. This expression transforms into the following equivalent propositions:

Lemma 3.21. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for , and let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for all . Then, for each such that , with , it holds that and for all .*

*Proof. *Due to the way vectors and are built, we have that and , besides and for all such that . Because of this, and using the definition of , and , which implies that, regardless of the values of and it holds that , from whence
We also have and , which implies that, regardless of the values of and it holds that , from whence
, for all .

Corollary 3.22. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type represented by , with for all , and let be the fundamental set of an autoassociative Alpha-Beta memory of type represented by , with for all . Then, , for all , , with and for all .*

*Proof. *Let and be two indexes arbitrarily selected. By Lemma 3.21, the expressions used to calculate the th components of memories and take the following values:
Considering that for all , there are two possible cases. *Case 1 (). *We have the following values: and ; therefore . *Case 2 (). *We have the following values: and ; therefore .

Since both indices and were arbitrarily chosen within their respective domains, the result is valid for all and for all .

Lemma 3.23. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for all , and let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for all . Also, if we define vector as , and take a fixed index for all , let us consider two noisy versions of pattern : vector , which is an additive noise altered version of pattern , and vector , which is a subtractive noise altered version of pattern , with being a vector whose components take the values for all . If during the recalling phase, is presented as input to memory and is presented as input to memory , and if also it holds that for an index , being fixed such that , then .*

*Proof. *Due to the way vectors , and are built, we have that is the component in the vector with additive noise corresponding to component , and is the component in the vector with subtractive noise corresponding to component . Also, since , we can see that ; that is, and . There are two possible cases. *Case 1. *Pattern does not contain any component with value 0. That is, for all . By Lemma 3.15, for all ; then for all , which means that . In other words, expression is false. The only possibility for the lemma to hold is for expression to be false too. That is, we need to show that . According to Corollary 3.20, the latter is true if for every with , there exists such that ( AND ). Now, indicates that , such that , and by Lemma 3.21 for all , for all , from where we have , and by noting the equality , it holds that
On the other hand, for all the following equalities hold: and ; also, taking into account that , it is clear that there exists such that , meaning that , and therefore,
Finally, since for all it holds that , in particular , we have proven that for every with , there exists such that ( AND ), and by Corollary 3.20 it holds that , thus making expression false. *Case 2. *Pattern contains, besides the components with value 1, at least one component with value 0. That is, such that . Due to the way vectors and are built, for all , and, also, necessarily and thus . By hypothesis, being fixed such that and , and by Lemma 3.19 for all . Given the way vector is built, we have that for all , , so the former expression becomes: for all . Let be a proper subset of defined as follows: . The fact that is a proper subset of is guaranteed by the existence of . Now, indicates that , such that , and by Lemma 3.21 and for all , from where we have that for all , because otherwise . This means that for each , which in turn means that patterns and coincide with value 1 in all components with index . Let us now consider the complement of set , which is defined as . The existence of at least one value for which and is guaranteed by the known fact that . Observe that if for all , then for all it holds that , which would mean that . Since for which and , this means that for which and . Now, , and finally
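Lemmas 3.23 and 3.24 pair an additive-noise input to the max memory with a subtractive-noise input to the min memory. The directionality they exploit can be sketched as follows (our illustration, not the authors' code): because β is monotone nondecreasing in its second argument and the fundamental set is recalled perfectly, additive noise presented to the max memory can only turn output components on (never off), while subtractive noise presented to the min memory can only turn them off.

```python
# Illustrative sketch (ours) of the noise directionality used by the lemmas:
# the max memory V never loses a 1 under additive noise, and the min memory L
# never gains a 1 under subtractive noise.

def alpha(x, y):
    # alpha(0,0)=1, alpha(0,1)=0, alpha(1,0)=2, alpha(1,1)=1
    return 1 + x - y

def beta(x, y):
    # beta(x, y) = 1 exactly when x + y >= 2; monotone in y
    return 1 if x + y >= 2 else 0

def learn(patterns, agg):
    # agg = max builds the max memory; agg = min builds the min memory
    n = len(patterns[0])
    return [[agg(alpha(p[i], p[j]) for p in patterns) for j in range(n)]
            for i in range(n)]

def recall(M, x, agg):
    # agg = min recalls from the max memory; agg = max from the min memory
    return [agg(beta(M[i][j], x[j]) for j in range(len(x)))
            for i in range(len(M))]

patterns = [[1, 1, 0], [0, 1, 1]]
V = learn(patterns, max)            # max memory
L = learn(patterns, min)            # min memory

x = patterns[0]                     # original pattern [1, 1, 0]
x_add = [1, 1, 1]                   # additive noise: a 0 flipped to 1
x_sub = [1, 0, 0]                   # subtractive noise: a 1 flipped to 0

out_add = recall(V, x_add, min)     # V's response to the additive-noise input
out_sub = recall(L, x_sub, max)     # L's response to the subtractive-noise input

# Additive noise into V can only add 1s relative to the original pattern ...
assert all(a >= b for a, b in zip(out_add, x))
# ... and subtractive noise into L can only remove them.
assert all(a <= b for a, b in zip(out_sub, x))
```

This one-sided behavior is why the proofs can combine the two memories: each bounds the original pattern from one side.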

Lemma 3.24. *Let be the fundamental set of an autoassociative Alpha-Beta memory of type max represented by , with for all , and let be the fundamental set of an autoassociative Alpha-Beta memory of type min represented by , with for all . Also, if we define vector as , and take a fixed index for all , let us consider two noisy versions of pattern : vector which is an additive noise altered version of pattern , and vector , which is a subtractive noise altered version of pattern , with being a vector whose components take the values for all . If during the recalling phase, is presented as input to memory and is presented as input to memory , and if also it holds that for an index , being fixed such that , then .*

*Proof. *Due to the way vectors , and are built, we have that is the component in the vector with additive noise corresponding to component , and is the component in the vector with subtractive noise corresponding to component . Also, since , we can see that ; that is, and . There are two possible cases. *Case 1. *Pattern **G** does not contain any component with value 1. That is, for all . By Lemma 3.18, for all ; thus for all , which means that . In other words, expression is false. The only possibility for the lemma to hold is for expression to be false too. That is, we need to show that . According to Corollary 3.17, the latter is true if for every with , there exists such that ( AND ). Now, indicates that , such that , and by Lemma 3.19 for all , for all , from where we have , and by noting the equality , it holds that
On the other hand, for all the following equalities hold: and ; also, taking into account that , it is clear that there exists such that , meaning that , and therefore,
Finally, since for all it holds that , in particular , we have proven that for every with , there exists such that (