Abstract

With respect to the fuzzy boundaries of military heterogeneous entities, this paper improves the entity annotation mechanism for entities with fuzzy boundaries based on related research works. This paper applies a BERT-BiLSTM-CRF model fusing deep learning and machine learning to recognize military entities, with which a smart military knowledge base can be constructed. Furthermore, many military AI applications can be explored with the knowledge base and the military Internet of Things (MIoT). To verify the performance of the model, we design multiple types of experiments. Experimental results show that the recognition performance of the model keeps improving with the increasing size of the corpus in the multidata-source scenario, with the F1-score increasing from 73.56% to 84.53%. Experimental results of cross-corpus cross-validation show that the more entity types covered in the training corpus and the richer the representation types, the stronger the generalization ability of the trained model; the recall rate of the model trained on the novel and casual type corpus reaches 74.33%, and its F1-score reaches 76.98%. The results of the multimodel comparison experiments show that the BERT-BiLSTM-CRF model applied in this paper performs well for the recognition of military entities. The longitudinal comparison results show that the F1-score of the BERT-BiLSTM-CRF model is 18.72%, 11.24%, 9.24%, and 5.07% higher than those of the CRF, LSTM-CRF, BiLSTM-CRF, and BERT-CRF models, respectively. The cross-sectional comparison results show that the F1-score of the BERT-BiLSTM-CRF model improved by 6.63%, 7.95%, 3.72%, and 1.81% compared to the Lattice-LSTM-CRF, CNN-BiLSTM-CRF, BERT-BiGRU-CRF, and BERT-IDCNN-CRF models, respectively.

1. Introduction

The US military attaches great importance to the unified management of battlefield resources. In recent years, the US Defense Advanced Research Projects Agency (DARPA) has issued a number of basic research topics on the unified management of battlefield resources. The topic guides focus on the importance of building expert knowledge bases and knowledge graphs for battlefield resources [1]. They point out the research route and direction of the unified management of battlefield resources for the US military in the future. Relevant guidelines also point out that it is necessary to focus on expanding the coverage of military entity data and improving the quantity and quality of the military entity knowledge base by combining data from different sources, such as the existing data sets of the US Army, military canonical books, reliable professional websites, and military blogs. In terms of specific applications, the US military has achieved many practical results; the most famous case is that Palantir applied intelligence analysis technology and knowledge graphs to assist the US intelligence agencies in capturing Osama bin Laden and uncovering Ponzi schemes. Currently, Palantir is working with DARPA to conduct in-depth research on applying knowledge graphs and intelligence analysis technology to assist in intelligence gathering, processing, and military resource control. In the area of unified resource management, the PLA Academy of Military Sciences, in conjunction with the Department of Equipment Development, has carried out a number of studies on the management of military resources. In recent years, they have published “Distributed Resource Management Methodology,” “Research on End Resource Management Technology,” “Research on Resource Management Technology Based on Edge Computing,” and other related topics. With recognized military entities, many military applications for military management could be constructed, such as military resource planning, military resource scheduling, and military resource monitoring, according to the topics issued by DARPA and the PLA Academy of Military Sciences. Furthermore, the military resource posture could be drawn in real time with the development of MIoT technology.

Inspired by the topic guidelines of the US military and the PLA Academy of Military Sciences, we perform battlefield resource entity extraction from multisource heterogeneous data. The entities we are mainly concerned with are computing and storage, battlefield perception, communication networks, weapon platforms, and logistics support, as well as the integrated mission environment information, combatants, combat agencies, combat times, combat locations, and event information involved in combat resources. These military entities can be virtualized, and then, various virtual military resource services and military IoT applications can be built in cloud services [2]. With these cloud services, military staff can carry out studies on the virtual scheduling of military resources, thus improving the utilization of military resources.

Military entity recognition technology is based on general entity recognition technology, which has experienced a development process from dictionaries, rules, and machine learning to deep learning [3]. At present, recognition models based on pretrained language models and deep learning algorithms are the mainstream of entity recognition in both the military and general fields, driven by the increasing computing power of small computers, the maturity of deep learning technology, and the development of pretrained language models [4]. However, military entity recognition is a challenging task because of the distinct domain characteristics of the military field. Firstly, there is a lack of solid and reliable corpora for military entity recognition [5]. Since the data stored in military information systems can only be accessed by staff with special rights, and there are fewer AI researchers in the military field, the quantity and quality of military data sets cannot be compared with those of the open domain. Secondly, there are few types of corpora for military entity recognition. Military entities have distinct domain characteristics: multiple terms, multiple abbreviations, multiple types of expressions, multiple nested expressions, and multiple fresh words [3]. With few corpus types and data sources, only a few characteristics of military entities can be covered, and thus, military entity recognition across data sources cannot be studied. Last but not least, there is a lack of unified criteria for entity division. A nested entity can be divided at different granularities by different annotators, so a unified criterion is needed to guide the practice of military entity recognition.

To solve the problems discussed above, three types of military corpora, abbreviated, scientific or English name, and novel and casual, are constructed, with which different research on military entity recognition can be explored, such as cross-data-source entity recognition. We improve the entity labeling mechanism for entities with fuzzy boundaries based on the military entity recognition works conducted by other researchers. With the constructed corpora, three kinds of experiments are carried out: applying the BERT-BiLSTM-CRF model to military entity recognition on corpus sets of different sizes, applying the BERT-BiLSTM-CRF model to military entity recognition across corpus sets, and comparing the performance of multiple models for military entity recognition. The experimental results show that the model used in this paper outperforms the listed models, such as CRF, LSTM-CRF, and Lattice-LSTM-CRF, for military entity recognition. Furthermore, the generalization ability of the model is verified with these experiments; the experimental results provide a reliable reference for researchers using the BERT-BiLSTM-CRF model for military entity recognition.

2. Related Work

Military entity recognition has attracted much attention, and many research works have been conducted. In earlier work, researchers mainly applied machine learning combined with dictionaries or rules for military entity recognition.

Jiang et al. [6] have proposed a model combining CRF with rules to extract military entities from combat documents for the automated generation of combat orders, which combines external lexical features, grammatical rules of military expressions, and the rule learning capability of CRF. The model achieves an F1-value of 75.48% on a corpus constructed by the authors from 300 combat documents. Feng et al. [7] have proposed a semisupervised military entity recognition method based on CRF for identifying military entity information such as military ranks, military equipment, military facilities, and military institutions. The proposed method makes use of the basic features of military texts to construct a grammatical feature set of military texts and fuses it with the CRF model. Experiments were carried out on a corpus consisting of combat documents, duty documents, military documents, military online news, military blogs, and military reviews, and the results show that the highest F1-value of entity extraction was 90.9%. Shan et al. [8] have proposed a CRF-based military entity recognition method under a small-granularity strategy for military named entities with complex internal nested relationships and inconspicuous grammatical distinctions. This method applies a small-granularity strategy combined with a CRF model to identify small nondivisible military entities, and finally, the small granular entities are integrated to obtain complete military entities. Experiments were carried out on a manually annotated corpus of combat documents, and the model achieves an F1-value of 78% on the corpus.

With the continuous development of deep learning techniques and the increasing computing power of small computers, applications of deep learning in the field of military entity recognition are emerging. In [9], a recognition method based on deep neural network models has been studied; the method applies word vectors and word states as features for weapon name recognition and achieves an F1-value of 91.02% on a corpus set constructed from military website data. Liu et al. [10] apply a BiLSTM-CRF model to identify weaponry and equipment names, achieving an F1-value of 93.88% on a corpus constructed from military documents. Wang et al. [11] propose a character+BiLSTM+CRF model to extract military entities from military corpora, which aims to solve the problems of the complexity of manually constructed features and the inaccuracy of military text segmentation in traditional military named entity recognition methods; the experimental results show that the proposed model outperforms the traditional methods. Other deep learning-based applications for military entity recognition have been studied, such as recognition models based on a combination of a self-attention mechanism and a BiLSTM-CRF model [12], military entity recognition models based on multineural network collaboration [13], and military entity recognition models based on transfer representation learning [14]. The multineural-network-based model applies word vectors obtained from BERT pretraining as input and combines them with a BiLSTM-CRF model for military entity recognition, which achieves an F1-value of 84.07% on a corpus constructed from military websites and military blogs. However, the authors did not consider the sentence contextual features in the training process, so contextual coreference could not be effectively processed. A single data source was used, and the scenario of multiple corpus intersection was not considered in that work, which could not verify the generalization ability of the model. Furthermore, the entity type coverage was relatively small, and the effects of corpus set size and heterogeneous data on the experimental results were not illustrated in the experimental section. At present, entity recognition based on pretrained models and attention mechanisms is the mainstream in the field of generic entity recognition [15–20], which gives important insight into the direction of entity recognition technology development in the military field.

Recognition models based on lexicons, rules, and machine learning are traditional methods, which rely on heavy feature engineering and cost a lot in large-scale applications. Entity recognition models based on pretraining and deep learning do not need to rely on the support of basic feature engineering, and they are the main research direction for future military entity recognition. Some research results for named entity recognition have been achieved with these methods, but there are also many problems, such as no standard corpus to measure the merits of the models, no experimental tests across scenes and corpora, and no comprehensive consideration of all information in the context.

3. Construction of Multisource Heterogeneous Military Corpus

3.1. Analysis of Data Sources and Data Characteristics

Multisource heterogeneous data has the characteristics of wide fields, large span, and rich information. We can take advantage of this to construct a relatively complete set of military entities for the research work of military entity recognition. There are two kinds of military text data. One is nonpublic data, such as combat documents, military documents, reconnaissance intelligence, military teaching plans, simulation training task scenarios, and simulation logs; the other is open-source data, such as military blogs, military news, well-known military websites (such as Phoenix Military, Jane’s Defence Weekly, and Kanwa Defense), and arms dealer websites. These data can be divided into three categories according to the specific forms of military entity data: (1) abbreviation type, commonly used in military combat documents, such as double 35, which represents the double-barrel 35 mm artillery; (2) scientific or English name type, where many military entities are recorded in the form of scientific or English names in military books, such as the 7.62 mm sniper rifle, F-35, and Su-30; (3) novel and casual type, where the expression of military entities in network terms is relatively casual and many fresh and cool words are used, such as the network terms for the J-20, including “Wei Long,” “J-00,” and “door-to-door.” In addition, the entities in these corpora have the characteristics of fuzzy boundaries and multinesting.

3.2. Data Preprocessing and Corpus Construction

In this work, we mainly select four kinds of nonpublic data and three kinds of open-source data as the main sources of the experimental corpus. The nonpublic data contains representative military documents, reconnaissance intelligence, simulation training mission scenarios, and military books; the open-source data contains military blogs, military reviews, and well-known military websites. The selected data sources cover the three types of data discussed above, which can support the experimental needs of cross-data scenarios. Most of the nonpublic data is text that needs to be digitized first; then, the original text is segmented according to the punctuation marks “,”, “.”, “!”, “;”, and “?” and serialized into data in CSV or TSV format.
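As a concrete illustration of this step, the short sketch below splits raw text on those punctuation marks and writes one sentence per row to a TSV file. The helper name, the output file name, and the inclusion of full-width Chinese punctuation variants are our own assumptions, not the authors' tooling.

```python
import csv
import re

# Split on the listed punctuation marks; the full-width variants are an
# assumption for Chinese-language military text.
SPLIT_PATTERN = re.compile(r"[,.!;?，。！；？]")

def serialize(raw_text: str, out_path: str) -> None:
    """Segment raw text into sentences and write one sentence per TSV row."""
    sentences = [s.strip() for s in SPLIT_PATTERN.split(raw_text) if s.strip()]
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for sentence in sentences:
            writer.writerow([sentence])

serialize("The 38th army crossed the river at dawn. J-20s covered the move!",
          "corpus.tsv")
```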

We apply a crawler to access web data from military blogs, military reviews, and well-known military websites, then extract the text information by text density, and finally serialize the data with the method proposed above. The serialized corpus is the original unlabeled corpus. We label the original corpus at the word level [13]. In this way, three kinds of military entity recognition corpus sets, abbreviated, scientific or English name, and novel and casual, are obtained, covering 12 categories of entity objects: personnel name, military place name, time, military event, military institution, military facility, combat environment, computing and storage, battlefield perception, communication network, weapon platform, and logistics support. The size information of these corpora is shown in Table 1, and their entity information is shown in Table 2.

3.3. Data Labeling

There are various forms of military entities in multisource heterogeneous data. In [13], five rules have been proposed to solve the problem of recognizing military entities with fuzzy boundaries, but these rules cover only part of the cases. To address the remaining cases, we propose additional rules considering abbreviations and standardized expressions, listed below.

Rule 1: numbers are connected with weapons or equipment (or place names, military institution names), and they can be labeled as weapon or equipment entities (place name entities, military institution entities), such as 1130 anti-aircraft gun, 591 highlands, and the 38th army

Rule 2: numbers and length units are connected with the weapon entity, and they can be marked as the weapon entity, such as a 7.62 mm sniper rifle

Rule 3: adjectives are connected with weapon entities, and we label them as weapon entities, such as long endurance UAV

Rule 4: the abbreviations of English or Chinese characters of weapon entities are connected with numbers, and they can be marked as weapon entities, such as J-20 and J16

Rule 5: personal names are connected with military institutions, and they can be marked as military institutional entities, such as the “Yang Gensi company”

With the rules proposed in this work and in [13], we can solve the problem of determining entity boundaries in heterogeneous corpora.
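To make Rule 4 concrete, the sketch below marks letter-plus-digit abbreviations such as J-20 and J16 as single weapon-entity spans so that the annotator never splits them at the letter/digit boundary. The regular expression, the function name, and the WEAPON tag are illustrative assumptions, not the authors' implementation.

```python
import re

# Rule 4 (sketch): an English-letter abbreviation optionally joined to digits
# by a hyphen is treated as one weapon entity, e.g., "J-20" or "J16".
RULE4 = re.compile(r"[A-Za-z]{1,4}-?\d{1,4}")

def rule4_spans(sentence: str):
    """Return (start, end, tag) spans for abbreviation-style weapon entities."""
    return [(m.start(), m.end(), "WEAPON") for m in RULE4.finditer(sentence)]

print(rule4_spans("J-20 and J16 conducted a joint patrol"))
# [(0, 4, 'WEAPON'), (9, 12, 'WEAPON')]
```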

Named entities in the military field are characterized by multiple types, multiple professional terms, and little ambiguity [10]. The BIO labeling method, a concise and efficient labeling mechanism, is suitable for labeling entities with these characteristics. The specific labeling scheme is shown in Table 3.
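As a small illustration of the scheme (Table 3 itself is not reproduced here), the example below tags a toy sentence with hypothetical W (weapon) and L (location) labels, matching the label letters used in the CRF discussion of Section 4.3: "B-X" opens an entity of type X, "I-X" continues it, and "O" marks non-entity tokens.

```python
# BIO tags for a toy sentence; the W and L type letters are illustrative.
tokens = ["1130", "anti-aircraft", "gun", "deployed", "at", "591", "highlands"]
labels = ["B-W",  "I-W",           "I-W", "O",        "O",  "B-L", "I-L"]

for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```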

4. Military Entity Recognition Model

In this work, we apply the BERT-BiLSTM-CRF model to recognize battlefield resource entities in military text. This model uses the word vectors obtained by BERT pretraining as input information and integrates a bidirectional LSTM (Long Short-Term Memory) network and a CRF to identify entities from the input information. The model is divided into the BERT pretrained language model layer, the BiLSTM layer, and the CRF layer, and its structure is illustrated in Figure 1.

Let D denote the set of military corpora, thus D = {d_1, d_2, …, d_n}, where d_i denotes the ith corpus; s = {w_1, w_2, …, w_m} denotes a sentence in corpus d_i, where w_t denotes the tth word in sentence s. At the beginning, the input unit transforms each w_t into an input vector x_t; then, x_t is transformed into a dynamic word vector e_t by the transformer encoders; next, e_t goes through the forward LSTM units and the backward LSTM units; then, we get a feature matrix P; at the end, the CRF layer captures the dependencies between adjacent labels according to the feature vectors and outputs the corresponding label sequence y = {y_1, y_2, …, y_m}.
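A minimal PyTorch sketch of this pipeline is shown below. It assumes the Hugging Face transformers package for the BERT encoder and the pytorch-crf package for the CRF layer; the checkpoint name bert-base-chinese, the LSTM hidden size, and the class name are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel   # Hugging Face Transformers (assumed)
from torchcrf import CRF             # pytorch-crf package (assumed)

class BertBiLstmCrf(nn.Module):
    """Sketch of the BERT-BiLSTM-CRF pipeline described above."""

    def __init__(self, num_tags: int, bert_name: str = "bert-base-chinese",
                 lstm_hidden: int = 128):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * lstm_hidden, num_tags)   # feature matrix P
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # 1) Dynamic word vectors from the pretrained BERT encoder.
        embeddings = self.bert(input_ids, attention_mask=attention_mask)[0]
        # 2) Forward/backward LSTM passes learn sentence-level features.
        features, _ = self.bilstm(embeddings)
        emissions = self.emit(features)
        mask = attention_mask.bool()
        if tags is not None:   # training: negative log-likelihood loss
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)   # best tag sequence
```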

4.1. BERT Pretraining Model

Word vectors applied to military entity recognition can be trained from military corpora with the BERT pretraining model. This model uses a bidirectional transformer network structure to learn the semantic feature information of military text context. In particular, two kinds of unsupervised pretraining tasks, inspired by the idea of cloze filling, are developed to learn more context information from the text during the training process. One is the Masked Language Model (MLM), and the other is Next Sentence Prediction (NSP); they are introduced to overcome the unidirectionality problem met by most word vector generation models. As a result, fuller information can be extracted from the military corpus by the BERT pretrained language model, which organically combines the transformer encoder structure and the unsupervised training tasks.

4.1.1. Input Unit

The input unit of the BERT model consists of the Word Embedding, Segment Encoding, and Positional Encoding of the input sequences. The word feature, sentence feature, and position feature of each word in the input sentence are calculated before they are passed into the BERT layer. The word features are denoted as E_token, the sentence features as E_seg, and the position features as E_pos. The word feature of each word is provided by the corresponding word vector in the vocabulary trained by Google. E_seg denotes the segment number of the input sequence and takes either 0 or 1. We apply the absolute position mode for the position feature; that is, p_i = i, where p_i denotes the position feature of the word at the ith input position.

The word feature, sentence feature, and position feature of the corresponding position in the input sequence are added together to obtain the actual input X = {x_1, x_2, …, x_n}, where x_i = E_token(w_i) + E_seg(w_i) + E_pos(w_i) and n is the length of the input sequence. The input feature composition and calculation method are illustrated in Figure 2.
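The following sketch renders this addition directly; the vocabulary size, maximum sequence length, hidden width, and toy token ids are illustrative values, not the paper's settings.

```python
import torch
import torch.nn as nn

vocab_size, max_len, hidden = 21128, 512, 768   # illustrative sizes
tok_emb = nn.Embedding(vocab_size, hidden)      # word features E_token
seg_emb = nn.Embedding(2, hidden)               # sentence features E_seg (0 or 1)
pos_emb = nn.Embedding(max_len, hidden)         # position features E_pos, p_i = i

input_ids = torch.tensor([[101, 2769, 7770, 102]])          # toy token ids
segment_ids = torch.zeros_like(input_ids)                   # single-segment input
positions = torch.arange(input_ids.size(1)).unsqueeze(0)    # absolute positions

# The actual input is the element-wise sum of the three features.
x = tok_emb(input_ids) + seg_emb(segment_ids) + pos_emb(positions)
print(x.shape)  # torch.Size([1, 4, 768])
```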

4.1.2. Transformer Encoder Unit

The BERT model consists of a bidirectional transformer encoder network (the encoder structure is shown in Figure 3); the overall structure is illustrated in Figure 4. It takes the sequence X as input, and then, the multilayer transformer encoding units pretrain the sequence into dynamic word vectors.

The encoder unit includes a self-attention network unit, a feedforward neural network unit, and a basic normalization network unit. The self-attention network unit is used to learn the features of input sequences. The application of the self-attention mechanism frees BERT from the long-distance dependence problem of recurrent neural networks, and thus, it can perform parallel computing [21].

The principle of the self-attention unit used for learning sentence features is that it enables each word in a sequence to perform attention operations on every other word to capture the input features. The formula for attention calculation is given in equation (1):

Attention(Q, K, V) = softmax(QK^T / √d_k)V (1)

in which Q is the query vector, K is the key vector, V is the value vector, and d_k is the input vector dimension.

It is known that the ability of a single attention unit to learn input features is limited, so the multihead attention mechanism is applied by the transformer encoder unit. Its working principle is to perform different linear mappings of Q, K, and V, calculate their attention values, respectively, and then fuse the attention information obtained. Equations (2) and (3) are the calculation formulas of multihead attention:

MultiHead(Q, K, V) = Concat(head_1, …, head_h)W^O (2)

head_i = Attention(QW_i^Q, KW_i^K, VW_i^V) (3)
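The sketch below implements equation (1) directly, with multiple heads carried in a separate tensor dimension as in equations (2) and (3); the tensor shapes are a common convention, not taken from the paper.

```python
import math
import torch

def attention(q, k, v):
    """Scaled dot-product attention, equation (1)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v

# One tensor per head: batch=2, heads=8, length=16, d_k=64.
q = k = v = torch.randn(2, 8, 16, 64)
print(attention(q, k, v).shape)  # torch.Size([2, 8, 16, 64])
```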

The word order of input sequences is not considered by the self-attention unit, and the positional coding unit helps to solve this problem. At time t, the sum of the Word Embedding, Positional Encoding, and Segment Encoding is the actual input to the BERT model, as discussed in the input unit. Adding the position and segment encodings to the input ensures that the actual input word vector differs when the same word vector appears at different positions in different sequences.

4.1.3. Unsupervised Training Tasks

Inspired by the cloze task, the BERT model applies two unsupervised training tasks in the pretraining stage; one is the Masked Language Model, and the other is Next Sentence Prediction.

In the Masked Language Model task, the model randomly “removes” 15% of the words in the input sequences and then actively learns the contextual semantic relations of the input sequences from different directions. Through iterative training, the probability of inferring the correct answer is made as large as possible, so as to achieve the purpose of learning the text semantics. In the Next Sentence Prediction task, the model randomly selects sentence pairs from the training text, in which positive and negative samples each account for 50%. The training task is then carried out on the set of sentence pairs, and the BERT model is asked to judge their correlation, so as to learn the relations between two sentences.
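The masking step can be illustrated as below; for brevity this sketch always substitutes a [MASK] placeholder and omits BERT's full 80/10/10 replacement scheme, and the function name and seed are our own choices.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Replace ~mask_rate of tokens with [MASK]; return masked tokens and targets."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i in range(len(tokens)):
        if rng.random() < mask_rate:
            targets[i] = tokens[i]     # the model must recover these originals
            masked[i] = "[MASK]"
    return masked, targets

print(mask_tokens(["the", "J-20", "took", "off", "at", "dawn"]))
```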

4.2. BiLSTM Layer

The BERT pretraining model provides dynamic word vectors for the whole recognition system, but it is slightly insufficient in learning sentence features. Therefore, the BiLSTM layer, illustrated in Figure 2, is introduced to model sentences and learn the features of different input sequences. The input vector, which is the output of the BERT model, goes through the forward LSTM layer and the backward LSTM layer; then, it is transformed into a matrix P, where p_ij denotes the probability of the jth output label for the ith input word feature.

The LSTM model, illustrated in Figure 5, adds a forgetting gate f_t, an input gate i_t, an output gate o_t, and a cell state c_t to the structure of the RNN (recurrent neural network) model, so it is known as an improved RNN model. With the added units, the problems of long-distance dependence and gradient vanishing in recurrent neural networks can be alleviated. Meanwhile, the units can adjust the memory function of the network, which helps to maintain and update the state of the whole network.

The calculation formulas corresponding to the LSTM network units are shown as follows:

f_t = σ(W_f · [h_(t-1), x_t] + b_f)
i_t = σ(W_i · [h_(t-1), x_t] + b_i)
o_t = σ(W_o · [h_(t-1), x_t] + b_o)
c̃_t = tanh(W_c · [h_(t-1), x_t] + b_c)
c_t = f_t ⊙ c_(t-1) + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)

in which σ is the sigmoid function, x_t is the input vector, h_t is the output vector, W is the parameter matrix, and b is the offset parameter.
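The gate equations above translate directly into code; the following single-time-step sketch uses randomly initialized parameters purely for illustration.

```python
import torch

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing the gate equations above."""
    z = torch.cat([h_prev, x_t], dim=-1)
    f_t = torch.sigmoid(z @ W["f"] + b["f"])      # forgetting gate
    i_t = torch.sigmoid(z @ W["i"] + b["i"])      # input gate
    o_t = torch.sigmoid(z @ W["o"] + b["o"])      # output gate
    c_tilde = torch.tanh(z @ W["c"] + b["c"])     # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde            # cell state update
    h_t = o_t * torch.tanh(c_t)                   # hidden output
    return h_t, c_t

d_in, d_h = 8, 16
W = {k: torch.randn(d_in + d_h, d_h) for k in "fioc"}
b = {k: torch.zeros(d_h) for k in "fioc"}
h, c = lstm_step(torch.randn(1, d_in), torch.zeros(1, d_h),
                 torch.zeros(1, d_h), W, b)
print(h.shape, c.shape)  # torch.Size([1, 16]) torch.Size([1, 16])
```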

4.3. CRF Layer

Dependencies between tags are inherent in the input sequence; here, we take the BIO labeling system to illustrate this. The starting label of each word in the input sequence is “B-” or “O,” and usually “I-X” follows “B-X,” with “I-X” used as the ending label of the word. However, “I-” cannot be used as a starting label. For example, a legal annotation sequence is “B-L I-L I-L,” which together represents one piece of location information. Illegal labels such as “B-G I-L” may appear if the labeling process is not controlled. Unfortunately, the BiLSTM layer focuses on the context information and sentence features of the input sequence and cannot learn these annotation rules.

The CRF layer takes the output feature matrix P of the BiLSTM layer as input and outputs the globally optimal label sequence, that is, the most probable sequence annotation. The CRF layer transforms the dependency information between tags into constraints when predicting tags, so as to ensure the accuracy of prediction; for example, the “I-L” label cannot appear after the “B-W” label. The dependency constraint relationships between the labels are automatically learned by the CRF layer in the data training stage. The label constraint relationship is represented by the transfer matrix A, where A_ij represents the dependence intensity between the ith label and the jth label; the higher the score, the greater the intensity, and vice versa. In the actual prediction process, a start state and an end state are added to the input sequence; therefore, for k labels, the actual matrix is of size (k + 2) × (k + 2). For a tag sequence y = {y_1, y_2, …, y_n} whose length equals the length of the input sequence X, the model scores the tags of the input sequence as follows:

s(X, y) = Σ_{i=0}^{n} A_{y_i, y_(i+1)} + Σ_{i=1}^{n} P_{i, y_i} (5)

After the score is calculated, the normalized probability is calculated with a softmax unit:

P(y | X) = exp(s(X, y)) / Σ_{ỹ ∈ Y_X} exp(s(X, ỹ)) (6)

Let Y_X denote the set of all possible label sequences; the denominator of equation (6) sums the scores over all possible transfer paths. Taking logarithms on both sides of equation (6) gives the log-likelihood with respect to the input sequence X:

log P(y | X) = s(X, y) − log Σ_{ỹ ∈ Y_X} exp(s(X, ỹ)) (7)

In the training process, the label sequence with maximum probability is selected by maximizing the likelihood function; that is, the annotation sequence for the input sequence predicted by the CRF layer is

y* = argmax_{ỹ ∈ Y_X} s(X, ỹ) (8)
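The toy sketch below renders equations (5)–(8) for a short sequence: each candidate tag sequence is scored as the sum of emission scores P[i, y_i] and transition scores A[y_i, y_(i+1)], and the best sequence is found by exhaustive search, a stand-in for the Viterbi decoding used in practice; the start and end states are omitted for brevity.

```python
from itertools import product
import torch

def seq_score(P, A, y):
    """Equation (5) without start/end states: emission plus transition scores."""
    emission = sum(P[i, tag] for i, tag in enumerate(y))
    transition = sum(A[y[i], y[i + 1]] for i in range(len(y) - 1))
    return emission + transition

n_words, n_tags = 3, 4
P = torch.randn(n_words, n_tags)   # emission matrix from the BiLSTM layer
A = torch.randn(n_tags, n_tags)    # learned label transition matrix

# Equation (8): pick the highest-scoring tag sequence (brute force here).
best = max(product(range(n_tags), repeat=n_words),
           key=lambda y: seq_score(P, A, y).item())
print("predicted tag sequence:", best)
```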

5. Experiments

We have conducted extensive literature research and found that few authors have considered the heterogeneous characteristics of military texts; thus, they have ignored the effect of these characteristics on the precision of deep learning models. Therefore, we have carried out relevant studies and conducted extensive experiments on these aspects. The experiments and discussions are presented as follows.

5.1. The Evaluation Metrics

In this work, precision (P), recall (R), and F1-score (F1) are selected as metrics for evaluating the performance of military entity extraction. The metrics are defined as follows:

P = TP / (TP + FP)
R = TP / (TP + FN)
F1 = 2 × P × R / (P + R)

in which TP is the number of positive entities identified by the models, FP is the number of negative entities identified by the models, and FN is the number of effective entities not detected by the models.
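These definitions translate directly into code; the counts below are illustrative, not experimental results from the paper.

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

# Illustrative counts only.
p, r = precision(840, 160), recall(840, 210)
print(f"P={p:.4f}  R={r:.4f}  F1={f1(p, r):.4f}")
```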

5.2. Experimental Parameters

We carry out experiments with the BERT-Base model provided by an open-source project of Google. The hyperparameters used in the training process are listed in Table 4. The deep learning framework and basic runtime environment are PyTorch v1.6.0 and Python v3.6.2. The experimental hardware configuration is 256 GB of memory and four Nvidia GeForce RTX 3070 GPUs.

5.3. Experimental Design and Experimental Result Analysis
5.3.1. Experiments on Corpus of Different Sizes

Each of the three types of corpora is divided into three data subsets of similar size by the stratified sampling method, and experiments are carried out on them.

In experiment one (EP-ONE), one subset is extracted from each kind of corpus set to form a new corpus set, resulting in a total of 27 groups of experimental corpus sets. Each new corpus set is divided into five groups of similar size by the stratified sampling method, and a cross-validation mechanism is then used: four groups are selected as the training set, and the remaining one is used as the test set. With this cross-validation mechanism, five experiments are carried out on each new corpus set, for a total of 135 experiments. Finally, the mean of all experiments is taken as the test result.

In experiment two (EP-TWO), two subsets are extracted from each kind of corpus set each time to form a new corpus set, resulting in 27 groups of experimental corpus sets. The following process is consistent with EP-ONE.

In experiment three (EP-THREE), all the data is selected, and the corpus is divided into five groups of similar size by the stratified sampling method. Then, the cross-validation mechanism is used as in EP-ONE. A total of five experiments are carried out, and finally, the mean of the five experiments is taken as the test result. The experimental results are listed in Table 5.
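A hedged sketch of this splitting protocol is given below, using scikit-learn's StratifiedKFold as one way to realize the stratified five-group split; `sentences` and `strata` (a per-sentence stratification key, e.g., the dominant entity type) are hypothetical inputs, not the authors' exact procedure.

```python
from sklearn.model_selection import StratifiedKFold  # assumed available

def five_fold(sentences, strata):
    """Yield (train, test) splits: four groups train, one group tests."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(sentences, strata):
        yield ([sentences[i] for i in train_idx],
               [sentences[i] for i in test_idx])
```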

The experimental results show that the performance of military entity recognition with the BERT-BiLSTM-CRF model improves constantly as the training corpus grows: the F1-value of EP-TWO is 6.6% higher than that of EP-ONE, the F1-value of EP-THREE is 10.97% higher than that of EP-ONE, and the F1-value of EP-THREE is 4.37% higher than that of EP-TWO. To explain this, we analyze the data features of the corpus constructed in this work. The analysis shows that the military entities distributed in the corpus have multiple types and a variety of representations. These characteristics make the distribution of the entities sparse, and the smaller the corpus set, the more obvious the data sparsity. Therefore, the evaluation metric values are low with a small training corpus set; for example, the F1-value of EP-ONE for military entity recognition is only 73.56%. However, as the corpus set grows, the sparsity problem is gradually alleviated; for EP-THREE, the F1-value of military entity recognition reaches 83.53%. Therefore, in order to ensure the performance of military entity recognition, it is necessary to build a relatively large corpus for training the BERT-BiLSTM-CRF model.

5.3.2. Cross-Validation Experiments

Cross-validation experiments are conducted on the three types of corpora; the experiments are divided into two parts: selecting any one of them as the training set and the other two as the test sets and selecting any two of them as the training set and the remaining one as the test set.

The experiments are conducted according to the above two strategies. For convenience of presenting the results, the corpus types are represented by numbers, where “1” represents the abbreviated corpus, “2” represents the scientific or English name corpus, and “3” represents the novel and casual corpus. The experimental results are listed in Table 6.

The experimental results show that the generalization ability of the BERT-BiLSTM-CRF model used in this work is not robust when the model is applied across corpus sets, owing to the large differences in entity representation between corpora. For example, when the model is trained on the abbreviated corpus and tested on the other two types of corpora, the recall rate is only 49.78%, and the F1-value is 56.85%. However, when it is trained on the novel and casual corpus and tested on both the abbreviated and the scientific or English name corpora, it achieves better performance: the recall rate and the F1-value are 74.33% and 76.98%, an increase of 24.55% and 20.13%, respectively, over the first group. We sample entities from the different corpora and compare their distributions; we find that the novel and casual corpus contains a larger span of entity types and entity representation types, which is why the model trained on it performs well. When cross-corpus training is conducted, the generalization ability of the model is significantly improved, with F1-values reaching 74.36%, 82.79%, and 87.39%, respectively. The purpose of this experiment is to illustrate that when entities must be extracted across scenes and corpus sets, one needs to pay attention to the entity features and the distribution of entities in the corpora of different scenes and choose the training corpus reasonably. The results also show that when hardware conditions are limited, users can choose a corpus with more entity types as the training set and will achieve better performance.

5.3.3. Comparison Experiments

The comparison models are divided into two groups, vertical and horizontal, where the vertical group contains the CRF, LSTM-CRF, BiLSTM-CRF, and BERT-CRF models, and the horizontal group contains the Lattice-LSTM-CRF, CNN-BiLSTM-CRF, BERT-BiGRU-CRF, and BERT-IDCNN-CRF models. Experiments are carried out on the whole corpus, and the experimental process is the same as in EP-THREE. The models chosen for comparison have achieved state-of-the-art results in the development of named entity recognition; thus, the experimental results are more convincing when compared against these models.

In the experimental process, the CRF model is trained and tested with the open-source CRF++ (v0.58) tool. The LSTM-CRF and BiLSTM-CRF models use word vectors trained with word2vec as input, and the vector dimension is 300 [22]; their hyperparameters are consistent with the literature [13]. The Lattice-LSTM-CRF model adopts the hyperparameters in the literature [23]. The CNN-BiLSTM-CRF model also adopts word2vec word vectors as input with a vector dimension of 300, and its other hyperparameters are consistent with the literature [24]. The experimental results are listed in Tables 7 and 8.

The experimental results show that the BERT-BiLSTM-CRF model applied in this work outperforms the other models listed above and achieves better performance for the recognition of military entities on the corpus. In the longitudinal comparison experiments, we compare the metric values of the BERT-BiLSTM-CRF model with each model in the longitudinal group. Compared to the CRF model, the recall rate and F1-value are increased by 23.76% and 18.72%, respectively; compared to the LSTM-CRF model, by 11.65% and 11.24%; compared to the BiLSTM-CRF model, by 8.94% and 9.24%; and compared to the BERT-CRF model, by 5.05% and 5.07%. Word2vec produces static word vectors, which perform poorly for entity recognition in scenarios with dynamic word senses, whereas dynamic word vectors perform well in such scenes. Therefore, the LSTM-CRF and BiLSTM-CRF entity recognition models are not as effective as the BERT-BiLSTM-CRF model with dynamic pretrained word vectors on a corpus with multiple representation types. Although the BERT-CRF model introduces a pretraining mechanism, it is not as effective as the BERT-BiLSTM-CRF model, whose bidirectional long short-term memory network learns the contextual features of sentences. In the horizontal comparison experiments, the experimental process is the same as in the longitudinal experiments, and we analyze the performance as follows: compared to the Lattice-LSTM-CRF model, the recall rate and F1-value are increased by 7.08% and 6.63%, respectively; compared to the CNN-BiLSTM-CRF model, by 7.81% and 7.95%; compared to the BERT-BiGRU-CRF model, by 2.9% and 3.72%; and compared to the BERT-IDCNN-CRF model, by 1.75% and 1.81%. The Lattice-LSTM-CRF model uses the improved lattice LSTM unit to integrate word and word order information, which prevents word segmentation errors from propagating through the network. The CNN-BiLSTM-CRF model uses a CNN to extract character-level features of words, which improves the vector representation ability of words and reduces the influence of segmentation errors. Although these two models improve character and word feature extraction, they cannot fundamentally overcome the weakness of static word vectors, so their performance improvement and application scenarios are limited to some extent. The BERT model, which skillfully combines the multihead attention mechanism and unsupervised training subtasks, can integrate characters, words, sentences, and word order to learn contextual information and then complete the comprehension task. The experimental results show that there is little difference between the models with a pretraining structure, but BiLSTM performs better than IDCNN and BiGRU in modeling serialized text features, so the recognition performance of the BERT-BiLSTM-CRF model is better in the entity recognition task on military language data.

6. Conclusion

In this work, we construct a set of corpora and propose five rules for identifying the fuzzy boundaries of military entities, which are applied to solve problems in military entity recognition such as the lack of corpora, the single type of corpus, and the disunity of entity boundary division. These corpora are divided into three types: abbreviation type, scientific or English name type, and novel and casual type. With these corpora, we have conducted three types of experiments: (1) military entity recognition experiments using the BERT-BiLSTM-CRF model on corpus sets of different sizes, (2) military entity recognition experiments using the BERT-BiLSTM-CRF model across corpus sets, and (3) comparison experiments of multimodel military entity recognition. The experimental results illustrate the effect of data set size and data distribution on the accuracy of the entity recognition model and also validate the effectiveness of the BERT-BiLSTM-CRF model for military entity recognition.

At present, due to the limitation of data access, the amount of nonopen data available is limited, and many military entities are not yet covered. In the future, more new data samples will be generated with adversarial neural networks by learning from existing sample data and combining the descriptions of some data patterns given by domain experts; these samples will be used to enrich the existing corpus. In addition, methods such as transfer learning, knowledge distillation, and unsupervised learning will be considered to reduce the reliance on corpus size and accuracy and to build lightweight military entity recognition models. In terms of practical applications, the military knowledge graph plays a critical role in promoting the development of military intelligence, and military intelligence applications based on military knowledge graphs will become more popular. In the future, intelligent resource management and scheduling technology will be widely used in the fields of unmanned combat and military wargame deduction.

Data Availability

The simulation experiment data of military texts used to support the findings of this study are restricted by the School Security Office in order to protect Military Intelligence Information. Data is available from Hui Li, [email protected], for researchers who meet the criteria for access to confidential data.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work by Ming Lyu is funded by the Natural Science Foundation of Jiangsu Province (Grant No. BK20180467).