It inserts several separator tokens into the document and obtains sequential sentence representations. Notably, this model was the first BERT-based model for extractive summarization. We, like many other works, use its framework as the document encoder.
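The separator-token scheme described above can be illustrated with a minimal sketch. This is not the original model's code; the helper name `prepare_inputs` and the naive period-based sentence split are assumptions for illustration. The idea is that a [CLS] token is placed before each sentence (and a [SEP] token after it), so the encoder output at each [CLS] position can serve as that sentence's representation.

```python
# Hedged sketch of separator-token input preparation (hypothetical helper).
# Sentence splitting by "." is a simplification for illustration only.
def prepare_inputs(document: str):
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    tokens = []
    cls_positions = []  # index of each sentence's [CLS] token
    for sent in sentences:
        cls_positions.append(len(tokens))
        tokens.append("[CLS]")        # marks the start of a sentence
        tokens.extend(sent.split())
        tokens.append("[SEP]")        # separates sentences
    return tokens, cls_positions

tokens, cls_pos = prepare_inputs("The cat sat. The dog barked.")
print(tokens)
print(cls_pos)  # one [CLS] index per sentence
```

After encoding, the hidden states at `cls_pos` would be taken as the sequential sentence representations.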
One of the most recent extractive models uses BERT to encode sentences and updates these sentence representations with the help of a graph. Note that DISCOBERT operates only on sentence-level units, whereas in our work we use sentence nodes together with additional semantic nodes to construct a variety of bipartite graphs.
It uses a BERT-based graph model for sentence encoding while also obtaining latent topic information. This topic information acts as additional semantic units and is produced by a neural topic model (NTM).
A different graph structure was proposed that uses three types of nodes, namely sentence nodes, EDU nodes, and entity nodes, along with RST discourse parsing to capture interactions between EDUs, exploiting external discourse information to improve model results.
Proposed N-GPETS
We present a neural attention heterogeneous graph-based statistical model with a pretrained encoder that builds strong relationships between sentences through additional semantic word nodes (sentence-word-sentence). Through node classification, sentences are selected to form the summary produced by our proposed N-GPETS model.
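The sentence-word-sentence connectivity idea can be sketched as follows. This is a minimal illustration, not the model itself: the helper name `build_bipartite_graph` and the whitespace tokenization are assumptions. It shows how sentence nodes become linked to each other indirectly through the word nodes they share in a bipartite graph.

```python
# Hedged sketch: a sentence-word bipartite graph (hypothetical helper).
# Two sentence nodes are related through any word node they both touch.
from collections import defaultdict

def build_bipartite_graph(sentences):
    word_to_sents = defaultdict(set)  # word node -> connected sentence nodes
    for i, sent in enumerate(sentences):
        for word in set(sent.lower().split()):
            word_to_sents[word].add(i)
    # edges of the bipartite graph: (sentence index, word)
    edges = [(i, w) for w, sents in word_to_sents.items() for i in sents]
    return word_to_sents, edges

sents = ["deep models summarize text", "graph models encode text"]
word_to_sents, edges = build_bipartite_graph(sents)
# word nodes shared by more than one sentence create
# sentence-word-sentence paths:
shared = {w for w, s in word_to_sents.items() if len(s) > 1}
print(sorted(shared))
```

In the actual model, message passing over such a graph would update the sentence-node representations, and a node classifier over sentence nodes would decide which sentences enter the summary.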