Abstract

The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between the ontologies describing different species. Ontology matching is a class of solutions for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.

1. Introduction

Ontological techniques have been widely applied to medical and biological research [1]. The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. Such ontologies of anatomy and development facilitate the organization of functional data pertaining to a species. To compare such data between species, we need to establish relationships between ontologies describing different species [2]. For example, all gene expression patterns described in ZFIN (the Zebrafish Model Organism Database) are annotated using the zebrafish anatomy ontology. A list of such ontologies is kept on the Open Biomedical Ontologies (OBO) website [3].

Heterogeneity is an inherent characteristic of ontologies developed by different parties for the same (or similar) domains. Semantic heterogeneity has become one of the main obstacles to sharing and interoperation among heterogeneous ontologies. Ontology matching, which finds semantic correspondences between entities of different ontologies, is a class of solutions to the semantic heterogeneity problem [4]. Matching techniques can be classified at a first level into element-level techniques and structure-level techniques. Element-level techniques obtain correspondences by considering the entities of the ontologies in isolation, ignoring that they are part of the structure of the ontology. Structure-level techniques obtain correspondences by analyzing how the entities fit into the structure of the ontology [5].

Recently, probabilistic approaches to ontology matching, which compare ontology entities in a global way, have produced competitive matching results [6–9]. OMEN [6] was the first approach to use a probabilistic representation of ontology mapping rules and probabilistic inference to improve the quality of existing ontology mappings. It uses a Bayesian network to represent the influences between potential concept mappings across ontologies. Building on OMEN, Albagli et al. [7] introduced iMatch, a novel probabilistic scheme for ontology matching that uses Markov networks rather than Bayesian networks, with several improvements; by using undirected networks, iMatch better supports the noncausal nature of the dependencies. Niepert et al. [8] presented a probabilistic-logical framework for ontology matching based on Markov logic, which has several advantages over existing matching approaches and provides a unified syntax that supports different matching strategies in the same language. Li et al. [9] improved the Markov logic model with a match propagation strategy and user feedback. References [8, 9] have shown the effectiveness of the Markov logic model on conference datasets.

In this paper, we consider a Markov logic based framework for anatomy ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies.

2. Materials

To evaluate the performance of our proposed approach, we conduct an experimental study using the adult mouse anatomy (2744 classes) and the NCI Thesaurus (3304 classes) describing the human anatomy, which are large and carefully designed ontologies. They also differ from other ontologies with respect to the use of specific annotations and roles, for example, the extensive use of the part_of relation. The two resources are part of the Open Biomedical Ontologies (OBO) [3]. We downloaded the OWL version of the two ontologies and the reference alignment (with 1516 correspondences) from the OAEI anatomy track [10].

The NCI Thesaurus, published by the National Cancer Institute (NCI), contains the working terminology of many data systems in use at NCI. Its scope is broad, as it covers vocabulary for clinical care as well as translational and basic research. Among its 37,386 concepts, 4,410 (11.8%) correspond to anatomical entities (the anatomic structure, system, or substance hierarchy). The adult mouse anatomy ontology has been developed as part of the mouse Gene Expression Database (GXD) project to provide standardized nomenclature for anatomical entities in the postnatal mouse. It will be used to annotate and integrate different types of data pertinent to anatomy, such as gene expression patterns and phenotype information, which will contribute to an integrated description of biological phenomena in the mouse [11].

3. Methods

In this section, we present our Markov logic model for anatomy ontology matching. Our model deviates from [8, 9] in several important ways. First, we model the important hierarchy structure defined by the part_of property, while previous works consider only the subclass-superclass hierarchy. Second, our model does not include property correspondences, because there are few property definitions in anatomy ontologies. Another difference lies in computing the a priori similarities. For the conference datasets, [8, 9] apply a similarity measure to the names of matchable entities. However, class names in the anatomy ontologies are meaningless identifiers such as "NCI_C12877." Therefore, we apply a similarity measure to the labels of classes.

We compute an alignment for the anatomy ontologies in the following three steps. First, we compute the a priori similarity based on the Levenshtein distance between the labels of two classes from different ontologies and apply a threshold to generate candidate matches. Then, we convert the representation of the input ontologies into first-order logic predicates and define a set of formulas as the matching strategy. Finally, we execute MAP inference in the generated Markov network as the alignment process and output the optimal alignment. Our matching system architecture based on Markov logic networks is illustrated in Figure 1.
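To make the candidate-generation step concrete, the following minimal Java sketch (our illustration; the actual implementation uses the SecondString library, and all class identifiers and labels below are hypothetical placeholders) computes a normalized Levenshtein similarity between labels and keeps only pairs above the threshold:

import java.util.Map;

/* Sketch of step 1: normalized Levenshtein similarity plus thresholding.
   Identifiers and labels are hypothetical placeholders. */
public class CandidateGeneration {

    // Classic dynamic-programming Levenshtein edit distance.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        return d[a.length()][b.length()];
    }

    // Similarity in [0,1]: 1 minus the edit distance normalized by the longer label.
    static double similarity(String l1, String l2) {
        int maxLen = Math.max(l1.length(), l2.length());
        return maxLen == 0 ? 1.0 : 1.0 - (double) levenshtein(l1, l2) / maxLen;
    }

    public static void main(String[] args) {
        double theta = 0.65; // cut-off threshold
        Map<String, String> mouse = Map.of("MA_0001", "urinary bladder",
                                           "MA_0002", "trigeminal nerve");
        Map<String, String> human = Map.of("NCI_0001", "Urinary_Bladder",
                                           "NCI_0002", "Trigeminal_Nerve");
        for (var m : mouse.entrySet())
            for (var h : human.entrySet()) {
                // Normalize labels (lowercase, underscores to spaces) before comparing.
                double s = similarity(m.getValue().toLowerCase(),
                                      h.getValue().toLowerCase().replace('_', ' '));
                if (s > theta) // emit a candidate ground atom sim(l1, l2, s)
                    System.out.printf("sim(%s, %s, %.2f)%n", m.getKey(), h.getKey(), s);
            }
    }
}

Only the surviving sim atoms enter the ground Markov network, which keeps the network small enough for exact MAP inference.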

3.1. Markov Logic Networks

Markov logic networks (MLNs) [12] are a statistical relational learning formalism based on first-order logic and Markov networks. A set of formulas in first-order logic can be seen as a set of hard constraints on the set of possible worlds: if a world violates even one formula, it has zero probability. The basic idea in Markov logic is to soften these constraints: when a world violates one formula, it is less probable, but not impossible. The fewer formulas a world violates, the more probable it is. Each formula has an associated weight that reflects how strong a constraint it is: the higher the weight, the greater the difference in log probability between a world that satisfies the formula and one that does not, other things being equal.

Definition 1. A Markov logic network $L$ is a set of pairs $(F_i, w_i)$, where $F_i$ is a formula in first-order logic and $w_i$ is a real number. Together with a finite set of constants $C = \{c_1, c_2, \ldots, c_{|C|}\}$, it defines a Markov network $M_{L,C}$ as follows: (1) $M_{L,C}$ contains one binary node for each possible grounding of each predicate appearing in $L$; the value of the node is 1 if the ground atom is true and 0 otherwise. (2) $M_{L,C}$ contains one feature for each possible grounding of each formula $F_i$ in $L$; the value of this feature is 1 if the ground formula is true and 0 otherwise, and the weight of the feature is the $w_i$ associated with $F_i$ in $L$.

An MLN can be viewed as a template for constructing Markov networks. Given different sets of constants, it will produce different networks, but all will have certain regularities in structure and parameters, given by the MLN (e.g., all groundings of the same formula will have the same weight). We call each of these networks a ground Markov network to distinguish it from the first-order MLN. From Definition 1, the probability distribution over possible worlds $x$ specified by the ground Markov network $M_{L,C}$ is given by
\[
P(X = x) = \frac{1}{Z} \exp\Bigl(\sum_i w_i n_i(x)\Bigr) = \frac{1}{Z} \prod_i \phi_i\bigl(x_{\{i\}}\bigr)^{n_i(x)},
\]
where $n_i(x)$ is the number of true groundings of $F_i$ in $x$, $x_{\{i\}}$ is the state (truth values) of the atoms appearing in $F_i$, and $\phi_i(x_{\{i\}}) = e^{w_i}$.
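As a toy illustration of our own (not taken from [12]): suppose an MLN contains a single formula $F_1$ with weight $w_1 = 2$ that has four possible groundings over the given constants. A world in which three of the groundings are true has probability $e^{2 \cdot 3}/Z = e^{6}/Z$, whereas a world in which all four are true has probability $e^{8}/Z$; the latter is $e^{2} \approx 7.4$ times more probable, so violating the soft formula makes a world less probable but not impossible.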

In the context of ontology matching, possible worlds correspond to possible alignments, and the goal is to determine the most probable alignment given the evidence. It has been shown that Markov logic provides an excellent framework for ontology matching, as it captures both hard logical axioms and soft uncertain statements about potential correspondences between ontological entities.

3.2. Ontology Representation

An ontology specifies a conceptualization of a domain in terms of classes and properties and consists of a set of axioms. Matching is the process of finding relationships or correspondences between entities from different ontologies. An alignment is a set of correspondences. A correspondence is a triple $\langle e_1, e_2, r \rangle$ asserting that the relation $r$ holds between the ontology entities $e_1$ and $e_2$, where $e_1$ is an entity from ontology $O_1$ and $e_2$ is an entity from ontology $O_2$ [4]. This generic form of correspondence captures a wide range of correspondences by varying what is admissible as matchable element and semantic relation, for example, equivalence ($\equiv$) or more general ($\sqsupseteq$). In the following we are only interested in equivalence correspondences between classes across anatomy ontologies.

The two input ontologies are described in OWL (Web Ontology Language). Classes are concepts organized in a subclass-superclass hierarchy with multiple inheritance. The is_a property organizes classes into this hierarchy, and the part_of property describes the part-whole relationship between two classes. The disjointWith property describes a relationship between two classes that is interpreted as the emptiness of the intersection of their interpretations. For example, in OWL we can say that Plant and Animal are disjoint classes: no individual can be both a plant and an animal (which would have the unfortunate consequence of making SlimeMold an empty class). SaltwaterFish might be the intersection of Fish and the class SeaDwellers. Figure 2 depicts fragments of the human and mouse anatomy ontologies.

We introduce a set of predicates to model the structure of the ontologies to be matched. The defined predicates are shown in Table 1. We use the predicate class1(x1) to represent that x1 is a class from ontology O1; for example, class1("NCI_C33854") states that "NCI_C33854" is a class from ontology O1. We use the predicates isA1(x1, y1) and partOf1(x1, y1) to model the class hierarchy in ontology O1, for example, isA1("NCI_C33854", "NCI_C25762") and partOf1("NCI_C33854", "NCI_C12686"). The predicate disjointWith1(x1, y1) models the disjointness relationship between two classes, for example, disjointWith1("NCI_C21599", "NCI_C25444"). The predicate hasLabel1("NCI_C33854", "Vascular_System") represents that class "NCI_C33854" has the label "Vascular_System." We also introduce a predicate sim(l1, l2, s) to represent the similarity s between the labels l1 and l2 of two classes from different ontologies, for example, sim("Vascular_Endothelium", "blood vessel endothelium", s), where s is a real number. If we apply a similarity measure based on the Levenshtein distance [13], the similarity of "Vascular_Endothelium" and "blood vessel endothelium" equals 0.54. The application of a threshold θ is a standard technique in ontology matching. We only generate ground atoms of sim for those pairs of labels whose similarity is greater than θ; correspondences with a similarity less than θ are deemed incorrect.

We differentiate between two types of predicates: hidden and observed. The ground atoms of observed predicates are given and describe the knowledge encoded in the ontologies. The ground atoms of hidden predicates are not given and have to be predicted using MAP inference. We use the hidden predicate map(x1, x2) to model the sought-after class correspondences.

We use the following notation conventions in Table 1 and throughout the rest of this paper: (1) all entities from ontology O1 have a subscript "1," and all entities from ontology O2 have a subscript "2"; (2) lowercase x, y, and z, with or without a subscript, denote classes; (3) lowercase l, with or without a subscript, denotes a label.

3.3. Matching Formulas

With the predicates defined, we can now encode our strategies for the task as weighted first-order logic formulas. Markov logic combines both hard and soft first-order formulas. This allows the inclusion of both known logical statements and uncertain formulas modeling potential correspondences and structural properties of the ontologies, and it enables joint inference over interdependent hidden ground atoms.

We will introduce five types of constraints to model different matching strategies, namely, a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation. A formula without a weight is a hard constraint and holds in every computed alignment. A formula with a weight is a soft constraint, and the weight reflects how strong a constraint it is. For simplicity, we will from now on assume that the predicates class1 and class2 are implicitly added as preconditions to every formula for each class appearing in the formula.

A Priori Confidences. We compute an initial a priori similarity for each pair of labels of two classes across ontologies based on the Levenshtein distance [13] and use a cut-off threshold θ to produce matching candidates, above which ground atoms of the predicate sim are added to the ground Markov network. The higher the similarity between the labels of two classes, the more likely it is that the correspondence between the two classes is correct. We introduce the following formula to model the a priori confidence of a correspondence:
\[
s : \; \mathit{hasLabel}_1(x_1, l_1) \wedge \mathit{hasLabel}_2(x_2, l_2) \wedge \mathit{sim}(l_1, l_2, s) \Rightarrow \mathit{map}(x_1, x_2).
\]

Here, we use the similarity s between the labels as the formula weight, since the confidence that a correspondence is correct depends on how similar its labels are.

Cardinality Constraints. In general, alignments can be of various cardinalities: 1:1 (one to one), 1:m (one to many), n:1 (many to one), and m:n (many to many). In this work, we assume the one-to-one constraint. We use two hard formulas stating that a concept from ontology O1 can be equivalent to at most one concept from ontology O2 and vice versa, which ensures the consistency of a computed alignment:
\[
\mathit{map}(x_1, x_2) \wedge \mathit{map}(x_1, y_2) \Rightarrow x_2 = y_2,
\]
\[
\mathit{map}(x_1, x_2) \wedge \mathit{map}(y_1, x_2) \Rightarrow x_1 = y_1.
\]

Coherence Constraints. Coherence constraints reduce incoherence during the alignment process. These formulas are added as hard formulas to ensure their satisfaction in the computed result alignment. The following formulas state that two disjoint classes of ontology O1 must not be matched, respectively, to two classes of ontology O2 that stand in a subclass relationship, and vice versa:
\[
\mathit{disjointWith}_1(x_1, y_1) \wedge \mathit{isA}_2(x_2, y_2) \Rightarrow \neg\bigl(\mathit{map}(x_1, x_2) \wedge \mathit{map}(y_1, y_2)\bigr),
\]
\[
\mathit{disjointWith}_2(x_2, y_2) \wedge \mathit{isA}_1(x_1, y_1) \Rightarrow \neg\bigl(\mathit{map}(x_1, x_2) \wedge \mathit{map}(y_1, y_2)\bigr).
\]

Stability Constraints. The idea of stability constraints is that an alignment should not introduce new structural knowledge. The formulas for stability constraints are soft formulas associated with weights reflecting how strong the constraints are. When an alignment violates a soft formula, it is less probable, but not impossible. Formulas (5) and (6) decrease the probability of alignments that map concept $x_1$ to $x_2$ and $y_1$ to $y_2$ if $y_1$ is a subclass of $x_1$ but $y_2$ is not a subclass of $x_2$, and vice versa:
\[
w_1 : \; \mathit{map}(x_1, x_2) \wedge \mathit{map}(y_1, y_2) \wedge \mathit{isA}_1(y_1, x_1) \wedge \neg\mathit{isA}_2(y_2, x_2), \quad (5)
\]
\[
w_2 : \; \mathit{map}(x_1, x_2) \wedge \mathit{map}(y_1, y_2) \wedge \neg\mathit{isA}_1(y_1, x_1) \wedge \mathit{isA}_2(y_2, x_2). \quad (6)
\]

Here, $w_1$ and $w_2$ are negative real-valued weights, rendering alignments that satisfy these formulas possible but less probable.

Match Propagation. Generally speaking, if two concepts $x_1$ and $x_2$ match, and there is a relationship between $x_1$ and $y_1$ in $O_1$ and a matching relationship between $x_2$ and $y_2$ in $O_2$, then we can increase the probability of a match between $y_1$ and $y_2$. This is accomplished by adding the following formulas to the model. Formula (7) states that if two classes match, it is more likely that their parent classes match too. Formula (8) states that if the parts of two classes match, it is more likely that the classes match too:
\[
w_3 : \; \mathit{map}(x_1, x_2) \wedge \mathit{isA}_1(x_1, y_1) \wedge \mathit{isA}_2(x_2, y_2) \Rightarrow \mathit{map}(y_1, y_2), \quad (7)
\]
\[
w_4 : \; \mathit{map}(x_1, x_2) \wedge \mathit{partOf}_1(x_1, y_1) \wedge \mathit{partOf}_2(x_2, y_2) \Rightarrow \mathit{map}(y_1, y_2). \quad (8)
\]

Here, $w_3$ and $w_4$ are positive real-valued weights that propagate alignment decisions across the structure of the ontologies. These formulas capture the influence of the ontology structure and the semantics of the ontology relations: they increase the probability of matches between entities that are neighbors of already matched entities in the two ontologies, helping to identify correct correspondences and to derive correspondences that would otherwise be missed.

3.4. MAP Inference as Alignment Process

After we have generated all ground atoms of the observed predicates introduced in the previous section, we can select an optimal alignment from the incoming hypotheses using MAP inference in the Markov logic network generated by the matching formulas. Given two ontologies, we compute the set of ground atoms of the hidden predicate that maximizes the probability given both the ground atoms of observed predicates and the ground formulas. Let $x$ be the set of ground atoms of observed predicates and let $y$ be the set of ground atoms of hidden predicates with respect to the given ontologies; we compute
\[
\hat{y} = \operatorname*{arg\,max}_{y} \sum_i w_i n_i(x, y),
\]
where $w_i$ is the weight of formula $F_i$ and $n_i(x, y)$ is the number of groundings of $F_i$ that hold in the world $(x, y)$.
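For intuition, the following self-contained Java sketch (our illustration; the actual system delegates this optimization to theBeast's ILP solver, and all candidates and weights below are hypothetical) performs exhaustive MAP inference over a handful of candidate correspondences, scoring each world by the sum of the a priori weights of the selected map atoms and rejecting worlds that violate the hard one-to-one cardinality constraint:

import java.util.*;

/* Brute-force MAP sketch (exponential; shown only for intuition).
   Candidates and weights are hypothetical. */
public class MapInferenceSketch {
    record Cand(String c1, String c2, double aPriori) {}

    public static void main(String[] args) {
        List<Cand> cands = List.of(
            new Cand("MA_1", "NCI_1", 0.90),
            new Cand("MA_1", "NCI_2", 0.70),  // competes with the first candidate
            new Cand("MA_2", "NCI_2", 0.80));
        int n = cands.size();
        double best = Double.NEGATIVE_INFINITY;
        int bestMask = 0;
        for (int mask = 0; mask < (1 << n); mask++) {  // every subset is a world
            if (!oneToOne(cands, mask)) continue;       // hard cardinality constraint
            double score = 0;
            for (int i = 0; i < n; i++)
                if ((mask & (1 << i)) != 0) score += cands.get(i).aPriori();
            if (score > best) { best = score; bestMask = mask; }
        }
        for (int i = 0; i < n; i++)
            if ((bestMask & (1 << i)) != 0)
                System.out.println(cands.get(i).c1() + " = " + cands.get(i).c2());
    }

    // Rejects worlds in which a class is mapped more than once.
    static boolean oneToOne(List<Cand> cands, int mask) {
        Set<String> used1 = new HashSet<>(), used2 = new HashSet<>();
        for (int i = 0; i < cands.size(); i++)
            if ((mask & (1 << i)) != 0)
                if (!used1.add(cands.get(i).c1()) || !used2.add(cands.get(i).c2()))
                    return false;
        return true;
    }
}

Real ground networks are far too large for such enumeration, which is why formulating MAP inference as an integer linear program is essential in practice.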

4. Results and Discussion

4.1. Experimental Setup

We conducted experiments implemented in Java, using the Jena API (jena.apache.org) and the SecondString library [14] to create the ground atoms and to compute the similarity between labels based on the Levenshtein distance. We then applied theBeast [15] for MAP inference in Markov logic networks, using an integer linear programming (ILP) solver as the base solver; theBeast is a software tool that provides means of inference and learning for Markov logic networks. Experiments were conducted on Fedora 7 with an Intel i5 CPU and 4 GB of memory.
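As an indication of how the ground atoms can be produced, here is a minimal sketch using the Jena ontology API (our illustration; the file name and the exact predicate spellings are our own assumptions, not the paper's code):

import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

/* Sketch: read an anatomy ontology with Jena and emit ground atoms for the
   class, isA, and hasLabel predicates. The file path is hypothetical. */
public class GroundAtomExtractor {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        model.read("file:mouse_anatomy.owl");   // hypothetical local path
        model.listClasses().forEachRemaining(c -> {
            if (c.isAnon()) return;             // skip anonymous class expressions
            System.out.printf("class1(\"%s\")%n", c.getLocalName());
            String label = c.getLabel(null);
            if (label != null)
                System.out.printf("hasLabel1(\"%s\", \"%s\")%n",
                                  c.getLocalName(), label);
            c.listSuperClasses(true).forEachRemaining(sup -> {  // direct superclasses
                if (!sup.isAnon())
                    System.out.printf("isA1(\"%s\", \"%s\")%n",
                                      c.getLocalName(), sup.getLocalName());
            });
        });
    }
}

Atoms for partOf would be extracted analogously from the restrictions on the part_of property, and the resulting atom files are then handed to theBeast.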

We evaluated our model for anatomy ontology matching with thresholds on the similarity ranging from 0.65 to 0.95. The weights of the soft formulas were determined manually. Although the weights of formulas can be learned with an online learner, being able to set qualitative weights manually is crucial, because training data is often unavailable. Furthermore, learning weights from the reference alignment as training data would lead to results that overfit the data. We set the weights for the stability constraints dealing with the class hierarchy to −0.01 and the weights for match propagation to 0.05, based on the consideration that match propagation and stability constraints are reciprocal ideas and hence of roughly equal importance.

We evaluated five different settings:
(i) prior: the formulation includes only the a priori confidences.
(ii) ca: the formulation includes the a priori confidences and cardinality constraints.
(iii) ca + co: the formulation includes the a priori confidences, cardinality constraints, and coherence constraints.
(iv) ca + co + st: the formulation includes the a priori confidences, cardinality constraints, coherence constraints, and stability constraints.
(v) ca + co + st + mp: the formulation includes the a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation.

4.2. Experimental Results

We use precision, recall, and F-measure to measure the performance of the matching results. Given the reference alignment, we compute the precision as the number of correct correspondences over the total number of correspondences in the computed alignment. We compute the recall as the number of correct correspondences over the number of correspondences in the reference alignment. Then, we compute the F-measure as
\[
F = \frac{2 \cdot \mathit{precision} \cdot \mathit{recall}}{\mathit{precision} + \mathit{recall}}.
\]
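As an illustrative calculation with made-up numbers (not our actual results): an alignment containing 1500 correspondences, 1400 of them correct, evaluated against the 1516 reference correspondences yields precision $1400/1500 \approx 0.933$, recall $1400/1516 \approx 0.923$, and $F \approx 0.928$.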

Figure 3 compares the precision, recall, and F-measure scores of the generated alignments against the reference alignment for thresholds ranging from 0.65 to 0.95 under the different settings. From Figure 3, we can see that our method achieves the highest precision in the ca + co + st + mp setting, while achieving the highest recall in the prior setting. We obtain a significant improvement in F-measure when adding more matching formulas to the model. We also note that there is no obvious difference between ca and ca + co; this is because only the human anatomy ontology defines disjointWith relationships. However, we keep the coherence constraints in our model, since they can further improve the quality of the results if disjointWith relationships are added to the mouse anatomy ontology in the future. Overall, the precision increases with the growth of the threshold, while the recall slightly decreases for higher thresholds in the various settings. The margins between the different settings become smaller for higher thresholds than for lower thresholds, because only a small number of incorrect correspondences remain among the candidates when we apply a threshold greater than 0.8. We achieve the maximum F-measure score at threshold 0.8.

We manually sampled several false positive and false negative correspondences for analysis. We found that false positive correspondences were mainly caused by labels with similar spellings. For example, the false correspondence ("NCI_C33592", "MA_0002058") has the similar labels "Spiral_Artery" and "sural artery"; furthermore, the superclass of "NCI_C33592" ("NCI_C12372") and the superclass of "MA_0002058" happen to be matched. False negative correspondences, in contrast, were mainly caused by dissimilar labels, such as "Tarsal_Plate" for "NCI_C33736" and "eyelid tarsus" for "MA_0000270." Moreover, "NCI_C33736" has no subclass or subpart, so we cannot find this correspondence through formula (7) or (8).

Figure 4 compares the performance of our method with the participating systems of OAEI 2014 that also produce coherent alignments in the anatomy track. From Figure 4, we can see that our method (MLN-OM) outperforms most of the systems and is comparable with the best system (LogMapLite). Note that we use a simple similarity measure based on the Levenshtein distance in the pruning phase and focus on the Markov logic model for ontology matching, whereas LogMapLite uses an external lexicon (e.g., WordNet or the UMLS lexicon) when computing an initial set of equivalence anchor mappings; such a lexicon could easily be adopted by our method in the pruning phase to further improve the quality of the matching results.

5. Conclusions

In this paper, we propose a Markov logic model for anatomy ontology matching. The model combines five types of matching strategies, namely, a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation. Experimental results demonstrate the effectiveness of the proposed approach.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The research was partially supported by the National Natural Science Foundation of China under Grant nos. 61170020 and 61402311, Jiangsu Province Colleges and Universities Natural Science Research Project under Grant no. 13KJB520021, Jiangsu Province Science and Technology Support Program under Grant no. BE2012075, Jiangsu Province Technology Innovation Fund Project for Science and Technology Enterprises under Grant no. BC2013124, Suzhou City Science and Technology Support Program under Grant no. SG201257, 2013 Suzhou Municipal Special Fund Project for Speeding up the Information Construction, and the Program for Graduate Research Innovation of Jiangsu Province no. CXZZ13_0813.