Research Article

[Retracted] N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization

Table 4

BERT-based graph structures.

Author(s) | Model name | BERT-based graph structure

Liu et al., 2019 [44] | BERTSum (sent) | It inserts separator tokens between the sentences of a document and obtains sequential sentence representations. Notably, this was the first BERT-based model for extractive summarization, and our work, like many others, uses its framework as the document encoder (see the input-layout sketch after this table).
Zhang et al., 2018 [36] | HiBERT | It first transforms the BERT architecture into a hierarchical structure and then pretrains it on unlabeled data.
Xu et al., 2019 [46] | DISCOBERT | One of the more recent extractive models, it uses BERT to encode sentences and updates these sentence representations with the help of a graph. DISCOBERT relies only on discourse-level units as nodes, whereas our work uses sentence nodes together with additional semantic nodes to construct a bipartite sentence-word graph.
Cui et al., 2020 [14] | Topic-GraphSum | It uses BERT to encode sentences within a graph-based model and learns latent topic information. This topic information serves as an additional semantic unit and is learned jointly with a neural topic model (NTM).
Huang et al., 2021 [15] | DiscoCorrelation-GraphSum | It proposes a different graph structure with three types of nodes: sentence nodes, EDU nodes, and entity nodes. It applies RST discourse parsing to capture interactions between EDUs and uses this external discourse information to improve model results.
Proposed | N-GPETS | Our neural attention heterogeneous graph-based pretrained statistical model builds strong relationships between sentences through additional semantic word nodes (sentence-word-sentence). Sentences are then selected through node classification to produce the summary in our proposed N-GPETS model (see the graph-construction sketch after this table).
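
As an illustration of the BERTSum-style document encoding referenced above, the following is a minimal sketch of how a document can be flattened into a single encoder input, with one [CLS] token per sentence and alternating segment ids. Tokenization is simplified to whitespace splitting here; the actual models use WordPiece and a pretrained BERT encoder, so the function and variable names are illustrative assumptions only.

```python
def build_bertsum_style_input(sentences):
    """Flatten a document into one encoder input sequence.

    Each sentence is wrapped as [CLS] ... [SEP]; the position of each
    [CLS] token later yields one vector per sentence, and the segment
    ids alternate (0, 1, 0, 1, ...) to mark sentence intervals.
    """
    tokens, segment_ids, cls_positions = [], [], []
    for i, sentence in enumerate(sentences):
        cls_positions.append(len(tokens))  # where this sentence's [CLS] sits
        sentence_tokens = ["[CLS]"] + sentence.lower().split() + ["[SEP]"]
        tokens.extend(sentence_tokens)
        segment_ids.extend([i % 2] * len(sentence_tokens))
    return tokens, segment_ids, cls_positions


if __name__ == "__main__":
    doc = ["Graph models relate sentences.", "Shared words act as bridges."]
    tokens, segments, cls_positions = build_bertsum_style_input(doc)
    print(cls_positions)  # [0, 6]: one sentence vector per [CLS] position
```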
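
To make the sentence-word-sentence relationship concrete, here is a minimal sketch of building a bipartite sentence-word graph with TF-IDF edge weights. The weighting scheme and helper names are illustrative assumptions; the full N-GPETS model additionally uses BERT-based node features, graph attention layers, and a node-classification head for sentence selection.

```python
import math
from collections import Counter


def build_sentence_word_graph(sentences):
    """Build a bipartite sentence-word graph.

    Nodes: one per sentence and one per unique word. Edges connect a
    sentence to the words it contains, weighted by TF-IDF, so any two
    sentences sharing a word are linked through that word node
    (the sentence-word-sentence relationship).
    """
    tokenized = [[w.strip(".,!?").lower() for w in s.split()] for s in sentences]
    n_sents = len(tokenized)

    # Document frequency of each word, used for the IDF term.
    doc_freq = Counter(w for sent in tokenized for w in set(sent))

    edges = {}  # (sentence_index, word) -> tf-idf edge weight
    for i, sent in enumerate(tokenized):
        term_freq = Counter(sent)
        for word, count in term_freq.items():
            idf = math.log(n_sents / (1 + doc_freq[word])) + 1.0
            edges[(i, word)] = (count / len(sent)) * idf
    word_nodes = sorted(doc_freq)
    return word_nodes, edges


if __name__ == "__main__":
    doc = [
        "Graph models connect sentences through shared words.",
        "Shared words let the graph relate distant sentences.",
        "Node classification then selects the summary sentences.",
    ]
    word_nodes, edges = build_sentence_word_graph(doc)
    # Sentences 0 and 1 both attach to the word node "shared",
    # giving the model a path between them in the graph.
    print(edges[(0, "shared")], edges[(1, "shared")])
```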