TY - JOUR
A2 - Yau, Wei-Chuen
AU - Li, Wen-Ting
AU - Gao, Shang-Bing
AU - Zhang, Jun-Qiang
AU - Guo, Shu-Xing
PY - 2021
DA - 2021/12/13
TI - Training Method and Device of Chemical Industry Chinese Language Model Based on Knowledge Distillation
SP - 5753693
VL - 2021
AB - Recent advances in pretrained language models have achieved state-of-the-art results on various natural language processing tasks. However, these large pretrained language models are difficult to deploy in practical settings such as mobile and embedded devices. Moreover, no pretrained language model exists for the chemical industry. In this work, we propose a method to pretrain a smaller language representation model for the chemical industry domain. First, a large corpus of chemical industry texts is used for pretraining, and a nontraditional knowledge distillation technique is used to build a simplified model that learns the knowledge in the BERT model. By learning from the embedding layer, the intermediate layer, and the prediction layer at different stages, the simplified model acquires not only the probability distribution of the prediction layer but also the embedding and intermediate representations, thereby acquiring the learning ability of the BERT model. Finally, the distilled model is applied to downstream tasks. Experiments show that, compared with current BERT distillation methods, our method makes full use of the rich feature knowledge in the intermediate layer of the teacher model while building a student model on the BiLSTM architecture, which effectively addresses the excessive size of traditional Transformer-based student models and improves the accuracy of the language model in the chemical domain.
SN - 1058-9244
UR - https://doi.org/10.1155/2021/5753693
DO - 10.1155/2021/5753693
JF - Scientific Programming
PB - Hindawi
ER -