Abstract

In recent years, knowledge representation in the Artificial Intelligence (AI) domain has helped people understand the semantics of data and improved the interoperability between diverse knowledge-based applications. The Semantic Web (SW), one of the methods of knowledge representation, is the new generation of the World Wide Web (WWW), which integrates AI with web techniques and is dedicated to implementing automatic cooperation among different intelligent applications. Ontology, an information exchange model that defines concepts and formally describes the relationships between them, is the core technique of the SW, implementing semantic information sharing and data interoperability in the Internet of Things (IoT) domain. However, the heterogeneity issue hampers communication among different ontologies and prevents cooperation among ontology-based intelligent applications. To solve this problem, it is vital to establish semantic relationships between heterogeneous ontologies, a process known as ontology matching. The ontology metamatching problem is commonly a complex optimization problem with many local optima. To this end, the ontology metamatching problem is defined as a multiobjective optimization model in this work, and a multiobjective particle swarm optimization (MOPSO) with a diversity enhancing (DE) strategy (MOPSO-DE) is proposed to better trade off the convergence and diversity of the population. The well-known benchmark of the Ontology Alignment Evaluation Initiative (OAEI) is used in the experiment to test MOPSO-DE's performance. Experimental results show that MOPSO-DE can obtain high-quality alignments and reduce MOPSO's memory consumption.

1. Introduction

In recent years, knowledge representation in the Artificial Intelligence (AI) domain has helped people understand the semantics of data and improved the interoperability between diverse knowledge-based applications. The Semantic Web (SW) [14], one of the methods of knowledge representation, is the new generation of the World Wide Web (WWW), which integrates AI with web techniques and is dedicated to implementing automatic cooperation among different intelligent applications. Ontology, an information exchange model that defines concepts and formally describes the relationships between them, is the core technique of the SW, implementing semantic information sharing and data interoperability in the Internet of Things (IoT) domain [5]. However, one concept may be described with different terminologies in various specific fields, yielding the heterogeneity problem among different ontologies [6]. The heterogeneity issue hampers communication among different ontologies and prevents cooperation among ontology-based intelligent applications. If machines are incapable of sharing knowledge, the final goal of the SW cannot be reached and machines cannot cooperate with each other. Therefore, the ontology heterogeneity issue directly affects the development of the SW. To solve this problem, it is vital to establish semantic relationships, such as correspondences between the classes and properties of heterogeneous ontologies, so as to find the identical entity mappings, which is the so-called ontology matching. Similarity measures, which represent the core technology of ontology matching [7], are used to compute the similarity value between two entities. Selection, combination, and tuning are the three components of similarity integration. Among them, selection is of particular importance, since some contradictory results cannot be integrated.
Since a single similarity measure fails to ensure confidence in all heterogeneous scenarios, various similarity measures are integrated to obtain a satisfactory alignment [8]. The ontology metamatching problem is concerned with how to choose decent similarity measures, how to assign appropriate weights to them, and how to verify the alignment by removing incorrect correspondences to enhance the quality of the matching results; it is commonly a complex optimization problem with many local optima [9].

Although there have been many studies on solving the ontology matching problem with single-objective optimization strategies [10], they optimize only one of two conflicting objectives, namely, recall or precision. Optimizing recall (or precision) results in decreasing precision (or recall), i.e., a biased improvement. Since the alignment's f-measure comprehensively considers these two objectives, it is necessary to trade off the two conflicting objectives at the same time to achieve better results. However, research on multiobjective ontology matching technology is still in its infancy. To this end, MOPSO is used in this work to improve the alignment's quality. Most MOPSO algorithms use an external archive to save the local and global best particles, which greatly increases memory consumption when there are multiple local optimal solutions on the Pareto front, and sometimes these solutions cannot fully converge to the real Pareto front. For MOPSO, how to enable the solutions to quickly converge to the Pareto front (PF) and how to make the solutions evenly distributed on the true PF are two fundamental issues to be addressed. Because the local and global best particles used to guide the update of solutions at the current generation greatly cut down the search space, MOPSO tends to be trapped in local optima due to its fast convergence and uneven distribution. Therefore, enhancing population diversity to reduce the probability of premature convergence is vital to the performance of MOPSO. In recent years, the decomposition-based archiving approach for MOPSO has become a popular method for balancing the convergence and diversity of the population [11], but it still faces great challenges and increased memory consumption when dealing with complex optimization problems with multiple local optima, such as the ontology metamatching problem.
Multiobjective particle swarm optimization for feature selection with fuzzy cost [12] also performs well in improving population diversity. This method defines a fuzzy crowding distance measure to save candidate solutions and determine the global best particles in the archive. To better balance the convergence and diversity of the population and to save memory, an improved MOPSO algorithm with enhanced population diversity is proposed in this work to better guide the update of the solutions in the swarm. Since particle swarm optimization (PSO) is a popular strategy for solving the ontology matching problem [13], this paper proposes a multiobjective PSO based on a diversity enhancing strategy (MOPSO-DE) to solve it. In particular, MOPSO-DE uses the diversity enhancing strategy to efficiently improve the alignment's quality. To be specific, the contributions of this paper are as follows: (1) a multiobjective optimization model is constructed for the ontology metamatching problem; (2) a general multiobjective optimization framework is presented for solving the ontology metamatching problem and evaluating the alignment's quality; and (3) MOPSO-DE is proposed to solve the ontology metamatching problem efficiently.

The rest of the paper is organized as follows: Section 2 presents the development of existing swarm intelligence algorithm-based ontology matching techniques; Section 3 describes the related concepts of the ontology matching problem and the mathematical optimization model; Section 4 elaborates the implementation details of MOPSO-DE; Section 5 presents the experimental results and analysis; and Section 6 summarizes the work of this paper and outlines future work.

2. Swarm Intelligence Algorithm-Based Ontology Matching Technique

Because ontology matching is a complex optimization problem, Swarm Intelligence (SI) algorithms, such as the Brain Storm Optimization (BSO) algorithm [14], the Parallel Compact Differential Evolution (PCDE) algorithm [15], the Compact Genetic Algorithm (CGA) [16], the Artificial Bee Colony (ABC) algorithm [17], and the Evolutionary Algorithm (EA) [18], have become popular methods for integrating heterogeneous ontologies.

Martinez-Gil and Aldana-Montes were the first to propose Genetics for Ontology Alignments (GOAL) [19], which uses an Evolutionary Algorithm (EA) to determine suitable aggregating weights for the results generated by each similarity measure. Alexandru-Lucian and Iftene [20] optimize both the parameters and the threshold in the matching process to further filter unauthentic results. Acampora et al. [21] improve the quality of the solution and the convergence speed of the EA through a local search process. Xue and Wang [22] utilize a new metric that determines the weights required by several pairs of ontologies at a time in order to approximately measure the alignment's f-measure and guide the search direction of the algorithm. He et al. [23] propose an optimization method based on the Artificial Bee Colony (ABC) algorithm to integrate diverse similarity measures in the matching process and improve the alignment's quality. Xue et al. [24] use NSGA-III [25] to integrate diverse similarity measures in the matching process; however, as the number of similarity measures increases, the quality decreases. These proposals need to allocate memory space to store the similarity matrices determined by the similarity measures, which increases the space complexity of the algorithms and thus hinders obtaining a high-quality alignment. To address this, the Genetic Algorithm based Ontology Matching (GAOM) [26] uses an EA to find the optimal entity matching pairs and achieve high-quality matching results. Alves et al. [27] propose a hybrid GA, which combines the EA with a local search strategy to match instance information and determine the optimal concept mapping. Recently, Chu et al. [28] propose to model ontologies in a vector space and introduce a Compact Evolutionary Algorithm (CEA) to determine the optimal alignments.

Although the above SI-based ontology matching methods for integrating heterogeneous ontologies can ensure the quality of the alignment, their convergence speed is slow. Compared with the SI algorithms mentioned above in the ontology matching domain, PSO has the advantage of one-way information flow; that is, all particles can converge quickly. MapPSO [29] addresses the ontology matching problem using PSO and introduces a new metric based on statistical results to approximately evaluate the alignment's quality. Huang et al. [30] propose a compact PSO (cPSO) for sensor ontology matching in the Artificial Internet of Things (AIoT). However, these proposals all suffer from premature convergence, so the global optimal solution cannot be found and a high-quality alignment cannot be obtained. To overcome this drawback, multiobjective PSO (MOPSO) is introduced into the ontology matching domain. Xue et al. [31] propose a compact MOPSO applied in the large-scale biomedical ontology domain.

There exist a number of local optimal solutions in the ontology metamatching problem, and infinite candidate solutions can be found in the search space [9]. One of the main concerns of the existing MOPSO in the ontology metamatching domain is how to enhance population diversity to improve the alignment's quality. To this end, this work puts forward an improved MOPSO that gets rid of the external archive and its high computational cost, and it utilizes a diversity enhancing strategy to strike a balance between convergence and diversity.

3. Preliminaries

3.1. Ontology and Ontology Alignment

Definition 1. An ontology is defined as a 3-tuple $O = (C, P, I)$ [32], where $C$ represents a collection of class objects targeted at a certain domain, e.g., "book" can be interpreted as the class of all book objects in a library; $P$ represents the set of relationships (properties) between two objects, e.g., "has author" is a relationship between the "book" object and the "author" object; and $I$ represents the collection of specific individual instances corresponding to the class objects, e.g., "Computer Book" is an instance of the class "book." Figure 1 presents an example of two ontologies under alignment. The classes, properties, and instances of an ontology are called entities, and the alignment of two ontologies is the process of finding correspondences between entities. In the figure, rounded rectangles stand for classes, e.g., "Vehicle," and one-way arrows are properties between two objects, e.g., "is a." To ensure semantic interoperation between different systems, ontology is needed to describe semantic relations. Due to ontology designers' subjectivity, the same concepts may have different descriptions in different ontologies, which yields the heterogeneity problem. To solve this problem, it is necessary to find the mappings between the concepts of the ontologies, i.e., the so-called ontology alignment.

Definition 2. An ontology alignment is a set of correspondences between entities, where each correspondence is defined as a 4-tuple $(e_1, e_2, n, r)$, in which $e_1$ and $e_2$ are, respectively, the entities of the source and target ontologies; $n$ is the similarity value in the range $[0, 1]$; and $r$ is the relation between $e_1$ and $e_2$. In Figure 1, the double-sided arrows establish correspondences between two entities. For example, a connection between "Wheeled" in ontology 1 and "Wheeled Vehicle" in ontology 2 is established with a similarity of 0.82. The value of the similarity represents the confidence; for example, 0 means the nonequivalence relation and 1.0 means the equivalence relation.

Definition 3. The process of ontology alignment is usually defined as a function $A' = f(O_1, O_2, A, p, r)$, where $A'$ is the final matching result; $O_1$ and $O_2$ are the source and target ontologies to be matched; $A$ is the input (reference) alignment; $p$ is the set of parameters such as weights and thresholds; and $r$ is an external resource, such as an external dictionary, e.g., WordNet.

Figure 2 shows the process of ontology alignment. In the matching process, only the matching results whose confidence values are greater than the threshold are considered authentic, so how to filter out the unauthentic results is key to the ontology alignment process, and the filtering is performed based on the similarity measures.

3.2. Similarity Measures

Generally, there are three kinds of similarity measures in the ontology matching field, i.e., linguistic-based, string-based, and structure-based.

The linguistic-based similarity measure uses an external digital dictionary, such as WordNet [33], to calculate the similarity value of two words by comparing their hypernymy relation or whether they are synonymous. Wu and Palmer [34] is a classic similarity measure used with WordNet, which calculates the correlation of two words by considering the depths of the two synonym sets and the depth of their Least Common Subsumer (LCS). The formula of Wu and Palmer is defined as follows:

$$Sim_{WP}(s_1, s_2) = \frac{2 \times depth(LCS(s_1, s_2))}{depth(s_1) + depth(s_2)}$$

where $s_1$ and $s_2$ are the strings to be matched; $LCS(s_1, s_2)$ represents the closest common parent concept of $s_1$ and $s_2$; and $depth(s_1)$, $depth(s_2)$, and $depth(LCS(s_1, s_2))$ are the depths of $s_1$, $s_2$, and their LCS in the taxonomy, respectively.
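As a minimal illustration, the Wu and Palmer score can be computed once the taxonomy depths are known (here the depths are assumed to be precomputed, e.g., from WordNet; the function name is illustrative):

```python
def wu_palmer_sim(depth_lcs: int, depth1: int, depth2: int) -> float:
    """Wu & Palmer similarity: 2 * depth(LCS) / (depth(s1) + depth(s2))."""
    if depth1 + depth2 == 0:
        return 0.0  # degenerate case: both concepts at the root
    return 2.0 * depth_lcs / (depth1 + depth2)
```

For example, two sibling concepts at depth 3 whose LCS is at depth 2 score 2 * 2 / (3 + 3), i.e., about 0.67.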

The string-based similarity measure utilizes different distance calculation algorithms, such as N-gram [35], SMOA [20], Levenshtein [36], and Jaro-Winkler [37] distance. Because of the superior performance of N-gram in comparing the similarity of two strings [35], this paper uses N-gram to calculate the similarity value of strings, and its formula is as follows:

$$Sim_{N\text{-}gram}(s_1, s_2) = \frac{2 \times comm(s_1, s_2)}{n(s_1) + n(s_2)}$$

where $comm(s_1, s_2)$ is the amount of common substrings of $s_1$ and $s_2$, and $n(s_1)$ and $n(s_2)$ are the numbers of substrings of $s_1$ and $s_2$, respectively. According to [38], the best performance can be achieved when the substring length is 3.
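A minimal sketch of the N-gram measure, using the trigram setting (substring length 3) mentioned above (the helper name is illustrative):

```python
from collections import Counter

def ngram_sim(s1: str, s2: str, n: int = 3) -> float:
    """N-gram similarity: 2 * |common n-grams| / (|grams(s1)| + |grams(s2)|)."""
    def grams(s):
        s = s.lower()
        return [s[i:i + n] for i in range(len(s) - n + 1)]
    g1, g2 = grams(s1), grams(s2)
    if not g1 or not g2:
        return 0.0  # a string shorter than n has no n-grams
    # multiset intersection so repeated n-grams are counted correctly
    common = sum((Counter(g1) & Counter(g2)).values())
    return 2.0 * common / (len(g1) + len(g2))
```

Identical strings score 1.0, disjoint strings score 0.0, and partial overlaps such as "Wheeled" versus "WheeledVehicle" fall in between.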

The structure-based similarity measure mainly calculates the similarity value of two entities through the relationships between superclasses and subclasses. Matching entities are considered structurally similar if they have the same numbers of superclasses and subclasses. In this paper, the similarity is calculated based on the numbers of superclasses and subclasses (the out-in degrees) of entities in the different ontologies, and the relevant formula is defined as follows:

$$Sim_{struct}(e_1, e_2) = 1 - \frac{|sup(e_1) - sup(e_2)| + |sub(e_1) - sub(e_2)|}{sup(e_1) + sup(e_2) + sub(e_1) + sub(e_2)}$$

where $sup(e)$ and $sub(e)$ denote the numbers of superclasses and subclasses of entity $e$, respectively.

3.3. Ontology Metamatching Problem

Since one similarity measure cannot guarantee the matching quality in all circumstances, various similarity measures are usually integrated to enhance the quality of the matching results and obtain the final similarity value. In this work, the weighted average strategy is chosen to integrate the different similarity measures, which is defined as follows:

$$sim(e_1, e_2) = \sum_{i=1}^{n} w_i \times sim_i(e_1, e_2)$$

where $\sum_{i=1}^{n} w_i = 1$, $w_i \in [0, 1]$, $sim_i$ is the $i$-th similarity measure, and $n$ is the number of similarity measures.
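Assuming each similarity measure has already produced a full similarity matrix over the two ontologies' entities, the weighted-average aggregation can be sketched as follows (names are illustrative):

```python
def aggregate(sim_matrices, weights):
    """Weighted-average aggregation of similarity matrices.

    sim_matrices: list of |O1| x |O2| nested lists (one per measure).
    weights: aggregating weights, assumed to sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    rows, cols = len(sim_matrices[0]), len(sim_matrices[0][0])
    agg = [[0.0] * cols for _ in range(rows)]
    for w, m in zip(weights, sim_matrices):
        for i in range(rows):
            for j in range(cols):
                agg[i][j] += w * m[i][j]
    return agg
```

Each aggregated entry stays in [0, 1] because the weights form a convex combination of values in [0, 1].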

How to assign the appropriate weights to the similarity measures and how to select the filtering threshold for the matching results is referred to as the ontology metamatching problem [19]. In addition, the weight tuning process requires a trade-off between two conflicting objectives, i.e., the alignment's recall and precision, which are defined as follows:

$$recall = \frac{|R \cap A|}{|R|}, \qquad precision = \frac{|R \cap A|}{|A|}$$

where $R$ and $A$ are the reference alignment, formulated by experts in specific fields, and the final alignment, respectively.

When $recall$ equals 1, all correct matching pairs in the correspondences are found, while when $precision$ equals 1, all matching pairs in the correspondences found are correct. To be specific, the multiobjective optimization model of the ontology metamatching problem in this paper is defined as follows:

$$\max \; \big( recall(w, \theta), \; precision(w, \theta) \big)$$
$$\text{s.t.} \quad \sum_{i=1}^{n} w_i = 1, \quad w_i \in [0, 1], \quad \theta \in [0, 1]$$

where $w = (w_1, w_2, \ldots, w_n)$ and $\theta$ represent the weight set and the filtering threshold, respectively; $w_i$ is the aggregating weight of the $i$-th similarity measure; $n$ is the amount of similarity measures; and $recall(w, \theta)$ and $precision(w, \theta)$ calculate the recall and precision values of the ontology metamatching results under the parameters $w$ and $\theta$.
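Treating an alignment as a set of entity pairs, the two objectives, together with the f-measure derived from them, can be sketched as follows (a minimal sketch; names are illustrative):

```python
def evaluate(alignment, reference):
    """Recall, precision, and f-measure of an alignment against a reference.

    Both arguments are sets of (source_entity, target_entity) pairs.
    """
    correct = len(alignment & reference)  # correspondences found AND correct
    recall = correct / len(reference) if reference else 0.0
    precision = correct / len(alignment) if alignment else 0.0
    f = (2 * recall * precision / (recall + precision)
         if recall + precision > 0 else 0.0)
    return recall, precision, f
```

An alignment that finds one of two reference pairs plus one spurious pair scores recall 0.5, precision 0.5, and f-measure 0.5.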

4. Multiobjective Particle Swarm Optimization Algorithm Based on Diversity Enhancing Strategy

To effectively solve the ontology metamatching problem, a framework of a multiobjective particle swarm optimization algorithm based on a diversity enhancing strategy is proposed. In order to simplify the structure of the algorithm and reduce the influence of the external archive on the performance of solving complex optimization problems, such as the ontology metamatching problem, a diversity enhancing strategy is utilized in this framework to search for the optimal solutions and obtain a high-quality alignment. The framework of MOPSO-DE for ontology metamatching is given in Figure 3, where $m_1$, $m_2$, and $m_3$ represent the similarity measures; $SM_1$, $SM_2$, and $SM_3$ represent the corresponding similarity matrices; and $w_1$, $w_2$, and $w_3$ are the corresponding aggregating weights, respectively. $SM$ is the matrix aggregated from $SM_1$, $SM_2$, and $SM_3$; $\theta$ is the filtering threshold used to exclude unauthentic results; and the multiobjective processing module, the core of the framework, utilizes the MOPSO-DE algorithm to trade off the two conflicting objectives, i.e., recall and precision, simultaneously to evaluate the alignment's quality.

In Figure 3, the final goal of MOPSO-DE for ontology metamatching is to obtain the corresponding similarity matrix for each similarity measure, assign the right weights, and find a suitable threshold to filter out unauthentic correspondences, so as to trade off the two conflicting evaluation metrics, namely, recall and precision; with the help of the diversity enhancing strategy, a high-quality alignment can then be ensured.

4.1. Encoding Mechanism

In this paper, a decimal coding scheme [13] is used to encode each solution, where each particle consists of the weight set and a threshold. Figure 4 shows an example of encoding aggregating weights.

$c_1$ and $c_2$ are the cut points used to get the weights: two cut points are needed to get three subintervals on one interval. In general, $n-1$ cut points are used to represent the aggregating weights of $n$ similarity measures, and the sum of the weights is equal to 1, which ensures the efficiency of the algorithm. This coding mechanism ensures the valid allocation of the weight sets. In particular, the encoding process is as follows: (1) randomly generate $n$ real numbers in $[0, 1]$, marked as $c_1$, $c_2$, ..., $c_{n-1}$, and $\theta$, respectively; (2) sort the first $n-1$ cut points $c_1$, $c_2$, ..., $c_{n-1}$ in ascending order to get $c'_1$, $c'_2$, ..., $c'_{n-1}$, while $\theta$ is the threshold used to filter invalid results; and (3) calculate the aggregating weights according to the following equation:

$$w_i = c'_i - c'_{i-1}, \quad i = 1, 2, \ldots, n$$

where $c'_0 = 0$ and $c'_n = 1$.
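The cut-point decoding in steps (1)-(3) can be sketched as follows, assuming a particle layout of n-1 cut points followed by the threshold gene (names are illustrative):

```python
import random

def decode(particle):
    """Decode a particle [c1, ..., c_{n-1}, theta].

    The sorted cut points partition [0, 1] into n subintervals whose
    lengths are the aggregating weights (they sum to 1 by construction);
    the last gene is the filtering threshold.
    """
    *cuts, theta = particle
    bounds = [0.0] + sorted(cuts) + [1.0]
    weights = [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
    return weights, theta

def random_particle(n_measures):
    """n_measures - 1 cut-point genes plus one threshold gene, all in [0, 1]."""
    return [random.random() for _ in range(n_measures)]
```

For three measures, a particle such as [0.7, 0.3, 0.5] decodes into the weights (0.3, 0.4, 0.3) and the threshold 0.5.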

4.2. Diversity Enhancing Strategy

The traditional MOPSO algorithm requires an external archive to save the particles' information, which increases the memory consumption of the algorithm [13]. To improve the algorithm's efficiency, in this work we use the diversity enhancing strategy to execute the evolutionary process and trade off convergence and diversity. In the proposed MOPSO-DE, pairwise particles at each generation guide the update of the particles instead of the local and global best particles. In particular, two particles are first randomly selected from the elite set to compete in pairs; then, the particle with the better recall and precision values is marked as the winner particle, which guides the update of the loser particle at that generation. To better illustrate the principle of the diversity enhancing strategy, Figure 5 presents its framework.

MOPSO cannot solve the ontology metamatching problem well because there are many local optimal solutions that affect the performance of the algorithm. Therefore, the diversity enhancing strategy guides the update of particles without relying on the optimal solutions and an external archive, simplifying the structure of the algorithm, greatly reducing memory consumption, and ensuring the quality of the final alignment. Assume that $X_w(t)$ is the position of the winner particle and that $V_l(t)$ and $X_l(t)$ are the velocity and position of the loser particle, respectively. For the particles in generation $t$, the update formulas for the loser particle in the pairwise competition are as follows:

$$V_l(t+1) = R_1(t) \circ V_l(t) + R_2(t) \circ \big(X_w(t) - X_l(t)\big) \tag{12}$$

$$X_l(t+1) = X_l(t) + V_l(t+1) \tag{13}$$

where $R_1(t)$ and $R_2(t)$ are two randomly generated weight vectors in $[0, 1]^n$ and $\circ$ denotes element-wise multiplication. As can be seen, the position update (13) adopts the position update formula of the classic PSO, while the diversity enhancing strategy affects the velocity update (12) of the loser particle in the competition. The first term of the velocity update formula is consistent with the inertia term of the classic PSO algorithm; the second term indicates that the loser particle learns from the winner particle instead of being guided by the optimal solutions in the population. Because the strategy guides the loser particles' updates in this way, it further ensures the performance of the algorithm and yields better ontology metamatching results.
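The loser-particle update described above can be sketched as follows, clamping positions to [0, 1] since every gene is a cut point or a threshold (a minimal sketch; names are illustrative):

```python
import random

def update_loser(x_w, x_l, v_l):
    """Pairwise-competition update: the loser learns from the winner.

    x_w: winner position; x_l, v_l: loser position and velocity.
    """
    dim = len(x_l)
    r1 = [random.random() for _ in range(dim)]  # random inertia weight vector
    r2 = [random.random() for _ in range(dim)]  # random learning weight vector
    # velocity: inertia term + learning-from-winner term
    new_v = [r1[d] * v_l[d] + r2[d] * (x_w[d] - x_l[d]) for d in range(dim)]
    # position: classic PSO position update, clamped to the valid gene range
    new_x = [min(1.0, max(0.0, x_l[d] + new_v[d])) for d in range(dim)]
    return new_x, new_v
```

Note that only the loser moves; the winner survives unchanged into the next generation, which preserves the better of each competing pair.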

4.3. The Pseudocode of Multiobjective Particle Swarm Optimization Algorithm Based on Diversity Enhancing Strategy

Given the source and target ontologies $O_1$ and $O_2$, the number of iterations $T$, the population size $N$, the particle's current position $X$, the particle's current velocity $V$, the elite particle set $E$, the current generation $t$, the particle's fitness values $recall$ and $precision$, the problem dimension $n$, and the winner particle $X_w$ and the loser particle $X_l$, the pseudocode of MOPSO-DE is presented in Algorithm 1. MOPSO-DE first initializes the velocities and positions of the particles and calculates the two objective function values of each particle, i.e., $recall(X_i)$ and $precision(X_i)$, as its fitness values. Since the selected elite particles should have good convergence and distribution, this paper combines nondominated sorting with the calculation of the crowding distance over the fronts' solution sets to obtain the elite particle set $E$. The calculation of the crowding distance requires sorting all the solutions in the population by each objective. Specifically, the crowding distances of the first and last solutions are set to a maximum, and the crowding distance of the $i$-th solution is the sum, over the objectives, of the absolute differences between the objective values of the $(i+1)$-th and $(i-1)$-th solutions. Then, the recall and precision values of two randomly selected particles $X_a$ and $X_b$ are compared according to the pairwise competition strategy: if both the recall and precision of particle $X_a$ are greater than those of particle $X_b$, then $X_a$ is the winner particle, denoted as $X_w$, and it guides the update of the loser particle $X_l$.
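The crowding-distance bookkeeping described above, over (recall, precision) points, can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
def crowding_distance(front):
    """Crowding distance over a front of (recall, precision) points.

    Boundary solutions get an infinite distance; each interior solution
    accumulates, per objective, the absolute gap between its neighbors.
    """
    n = len(front)
    if n == 0:
        return []
    dist = [0.0] * n
    for m in range(2):  # two objectives: recall and precision
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        for k in range(1, n - 1):
            dist[order[k]] += abs(front[order[k + 1]][m] - front[order[k - 1]][m])
    return dist
```

Larger distances mark solutions in sparser regions of the front, which are preferred when forming the elite set.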

Input:
 two ontologies $O_1$ and $O_2$, number of iterations $T$, population size $N$;
 particle's current position $X$, particle's current velocity $V$, elite particle set $E$;
Output:
 winner particle $X_w$;
Initialization:
1 initialize generation $t = 0$;
2 calculate each particle's fitness values $recall(X_i)$ and $precision(X_i)$;
3 for ($i = 0$; $i < N$; $i$++)
4  $X_i$ = random(0, 1);
5  $V_i$ = random(0, 1);
6 end for
7 NonDominatedSort($P$);
8 calculateCrowdingDistance($P$);
9 sortFronts($P$);
10 get elite particle set $E$;
Evolution:
11 while $t < T$ do
12  randomly select two particles $X_a$ and $X_b$ from the elite set $E$;
13  $recall$ and $precision$ of particles $X_a$ and $X_b$ are calculated, respectively;
14  [$X_w$, $X_l$] = compete($X_a$, $X_b$);
15  if $recall(X_a) > recall(X_b)$ and $precision(X_a) > precision(X_b)$ then
16   $X_w = X_a$;
17  else
18   $X_w = X_b$;
19  end if
20  Update:
21   update velocity $V_l$ according to formula (12);
22   update position $X_l$ according to formula (13);
23  update particle's fitness values $recall$ and $precision$;
24  $t = t + 1$;
25 end while
26 return $X_w$.

5. Experiment

5.1. Experimental Configurations

In order to verify the effectiveness of MOPSO-DE, the well-known benchmark provided by the Ontology Alignment Evaluation Initiative (OAEI) http://oaei.ontologymatching.org/ is used. Each dataset of the benchmark consists of two ontologies and a reference alignment, which are used to evaluate the performance of different ontology matching systems. The reference alignments provided by OAEI's benchmark are used to evaluate the quality of alignments, and Table 1 briefly describes the OAEI benchmark.

In the experiment, MOPSO-DE is compared with the PSO-based matching technique [29], the MOPSO-based matching technique [13], and OAEI's participants, i.e., edna [39], AgrMaker [40], AROMA [41], ASMOV [42], CODI [43], Eff2Match [44], Falcon [45], GeRMeSMB [46], MapPSO [47], RiMOM [48], SOBOM [49], and TaxoMap [50]. The experimental results of MOPSO-DE, PSO, and MOPSO in this paper are the mean values of 30 independent runs and are presented in terms of recall, precision, and f-measure. Table 2 shows the alignment quality of the recall-driven and precision-driven variants of PSO and of MOPSO, as well as of the proposed MOPSO-DE, each run independently 30 times. Tables 3 and 4 show the comparison of the mean and standard deviation in terms of recall and precision among the recall-driven and precision-driven PSO, MOPSO, and MOPSO-DE. Tables 5 and 6 show the statistical analysis of the t-test based on Tables 3 and 4, respectively. In terms of memory consumption, MOPSO-DE is compared with MOPSO in Figure 6. In addition, recall and precision are averaged over the 30 independent runs, and the f-measure is calculated from them. Table 7 shows the comparison of MOPSO-DE with OAEI's participants in terms of the average quality of the matching results. We choose f-measure as the alignment metric since it comprehensively takes recall and precision into account.

The configurations of MOPSO and MOPSO-DE are as follows:
(i) The size of the population: 20
(ii) The number of iterations: 200
(iii) Learning factor: , 
(iv) String-based similarity measure: N-gram
(v) Linguistic-based similarity measure: Wu and Palmer method
(vi) Structure-based similarity measure: out-in degree

Since the ontology metamatching problem is a relatively small-scale problem, the population size is set to 20. In particular, the configurations of PSO and MOPSO are determined according to the corresponding literature [13, 29] to guarantee the alignment's quality.

5.2. Experimental Results

As can be seen from Table 2, since the ontology metamatching based on MOPSO-DE optimizes the two conflicting objectives simultaneously, i.e., recall and precision, a better f-measure can be obtained compared with the classic PSO driven by only one objective. The single-objective PSO optimizes the quality of the alignment by improving recall or precision, which leads to the sacrifice of the other objective. MOPSO can better balance the two objectives and achieve better results. In addition, thanks to the diversity enhancing strategy, the solutions converge quickly to the Pareto front and are evenly distributed on it, which helps the algorithm escape from local optimal solutions. Therefore, the proposed MOPSO-DE has better convergence and distribution than MOPSO, and thus better solutions can be found. It can be seen that the f-measure obtained by MOPSO-DE is superior to those of PSO and MOPSO.

This work uses the t-test to measure the differences between different matching systems. The t value is taken as an absolute value; the larger the value, the more significant the performance difference between the two systems. Since the experiments are run 30 times independently on each testing case, the t values are compared with the critical value 2.045. Tables 3 and 4 present the mean and standard deviation of recall and precision among the recall-driven and precision-driven PSO, MOPSO, and MOPSO-DE on the benchmark track; the standard deviation values are given in round parentheses. As can be seen from Tables 3 and 4, the means of recall and precision of MOPSO-DE are higher and the standard deviations are lower, which indicates the efficiency and stability of MOPSO-DE. In addition, as can be seen from Tables 5 and 6, all t values are greater than 2.045, which indicates that MOPSO-DE is significantly different from the classical PSO and MOPSO in terms of performance.

As shown in Table 7, MOPSO-DE achieves the best f-measure in the testing cases 1XX and 3XX, which shows that the multiobjective evolution mechanism can find better alignments than the other ontology matching systems. In addition, the introduction of the diversity enhancing strategy better balances diversity and convergence to significantly improve the alignment's quality. MOPSO-DE does not need a subjective threshold given by experts in advance, which gives it strong robustness for various matching problems. For the testing cases 202, 209, 248, 249, and 250, our method does not require additional reasoning and repairing processes, and fewer similarity measures are taken into consideration, which is reasonable for small-scale matching problems and still guarantees the alignment's quality. On these cases, MOPSO-DE achieves a good, though not optimal, f-measure compared with ontology matching systems such as AgrMaker and ASMOV, while the results of other systems such as edna, AROMA, CODI, Falcon, MapPSO, and TaxoMap drop to or close to 0. The reason is that their heterogeneous characteristics make these cases complex problems with many local optimal solutions. For the testing cases 202 and 209, the two ontologies under alignment have different lexical and linguistic characteristics and share only structural characteristics, which makes the matching process difficult. For the testing cases 248, 249, and 250, the two ontologies under alignment are highly heterogeneous, which makes the alignment's quality relatively low. MOPSO-DE can still achieve satisfactory results compared with SI algorithms such as MapPSO in these cases thanks to the multiobjective evolution and the diversity enhancing strategy. MapPSO is a PSO-based approach whose weights are manually set, making it easier to be trapped in local optima, whereas MOPSO-DE combines multiobjective PSO with the diversity enhancing strategy to avoid local optima and achieve high-quality alignments.

Figure 6 compares MOPSO-DE with MOPSO in terms of memory consumption. The diversity enhancing strategy updates the particles through pairwise competition at each generation instead of using the local and global best particles in an archive. It can be seen that, without an external archive for storing the particles' information, MOPSO-DE significantly reduces memory consumption, which indicates that the diversity enhancing strategy effectively reduces the computational cost. To sum up, MOPSO-DE is capable of efficiently solving the ontology metamatching problem.

6. Conclusions

This paper is devoted to solving the ontology metamatching problem. The cooperation between intelligent application systems requires sharing the semantic information of data, and the different term descriptions used in different fields lead to semantic heterogeneity. The heterogeneity issue hampers communication among different ontologies and prevents cooperation among ontology-based intelligent applications. In order to build a semantic bridge between different application systems, ontology is needed to describe semantic relations. To solve this problem, it is necessary to find the mappings between ontology entities, that is, to perform the ontology metamatching process. The ontology metamatching problem is first defined as a multiobjective optimization problem in this work, and MOPSO-DE is proposed to solve it. MOPSO-DE uses a diversity enhancing strategy to efficiently improve the alignment's quality: it enables the solutions to converge quickly to the Pareto front and to be uniformly distributed on the real PF. The experiment uses OAEI's benchmark to test MOPSO-DE's performance, and the experimental results show that MOPSO-DE can obtain high-quality alignments and reduce MOPSO's memory consumption.

However, according to the experimental results, the alignment quality of MOPSO-DE is not as good when dealing with relatively complex datasets, such as 202, 209, 248, 249, and 250. Therefore, we will further improve the diversity enhancing strategy in future work. Since MOPSO-DE performs well on problems with small dimensions, applying it to large-scale problems can also be considered in future work.

Data Availability

The data used to support this study can be found at http://oaei.ontologymatching.org.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Acknowledgments

This work is supported by the Natural Science Foundation of Fujian Province (No. 2022J01644 and 2020J01875), the National Natural Science Foundation of China (No. 62172095), the Fujian Province Young and Middle-Aged Teachers Education Research Project (No. JAT210647, JAT210651), the Industry-School Cooperative Education Program of Ministry of Education (No. 202102100004), and the Research Innovation Team of Concord University College Fujian Normal University in 2020 (No.2020-TD-001).