Review Article

A Comprehensive Survey of Abstractive Text Summarization Based on Deep Learning

Table 4

The results of different models on the Gigaword dataset. RG-1, RG-2, and RG-L denote the ROUGE-1, ROUGE-2, and ROUGE-L scores, respectively; the vocabulary column gives the input/output vocabulary sizes.

| Year | Method | RG-1 | RG-2 | RG-L | Vocabulary (in/out) |
|------|--------|------|------|------|---------------------|
| 2017 | SEASS [119] | 36.15 | 17.54 | 33.63 | 120k/69k |
| 2017 | DRGD [120] | 36.27 | 17.57 | 33.62 | 110k/69k |
| 2017 | FTSum_g [100] | 37.27 | 17.65 | 34.24 | 120k/69k |
| 2017 | Transformer [121] | 37.57 | 18.90 | 34.69 | 120k/69k |
| 2018 | Struct + 2Way + Word [122] | 35.47 | 17.66 | 33.52 | 70k/10k |
| 2018 | PG + EntailGen + QuestionGen [123] | 35.98 | 17.76 | 33.63 | 110k/69k |
| 2018 | CGU [124] | 36.3 | 18.0 | 33.8 | 110k/69k |
| 2018 | Reinforced-topic-ConvS2S [85] | 36.92 | 18.29 | 34.58 | 110k/69k |
| 2018 | Seq2seq + E2T_cnn [125] | 37.04 | 16.66 | 34.93 | 50k/50k |
| 2018 | Re^3Sum [126] | 37.04 | 19.03 | 34.46 | 110k/69k |
| 2019 | JointParsing [127] | 36.61 | 18.85 | 34.33 | 110k/69k |
| 2019 | Concept pointer + DS [128] | 37.01 | 17.10 | 34.87 | 150k/150k |
| 2019 | MASS [129] | 38.73 | 19.71 | 35.96 | 110k/69k |
| 2019 | UniLM [130] | 38.90 | 20.05 | 36.00 | 30k/30k |
| 2019 | BiSET [131] | 39.11 | 19.78 | 36.87 | 110k/69k |
| 2019 | PEGASUS [132] | 39.12 | 19.86 | 36.24 | 96k/96k |
| 2020 | ERNIE-GEN_BASE [133] | 38.83 | 20.04 | 36.20 | 50k/50k |
| 2020 | ERNIE-GEN_LARGE [133] | 39.25 | 20.25 | 36.53 | 50k/50k |
| 2020 | ProphetNet [134] | 39.51 | 20.42 | 36.69 | 110k/69k |
| 2020 | BART-RXF [135] | 40.45 | 20.69 | 36.56 | 120k/69k |
| 2021 | Mask attention network [136] | 38.28 | 19.46 | 35.46 | 110k/69k |
| 2021 | Transformer + Wdrop [137] | 39.66 | 20.45 | 36.59 | 32k/32k |
| 2021 | Transformer + Rep [137] | 39.81 | 20.40 | 36.93 | 32k/32k |
| 2021 | MUPPET BART Large [138] | 40.4 | 20.54 | 36.21 | 120k/69k |

The values in bold represent the state-of-the-art (SOTA) model for that year.
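
For reference, the ROUGE-1, ROUGE-2, and ROUGE-L scores reported in Table 4 are typically F1 values computed over the Gigaword test set. The survey does not specify the evaluation tooling (the cited papers generally use the official ROUGE toolkit), so the following is only a minimal sketch using the open-source rouge-score Python package, with an invented reference/candidate pair for illustration.

```python
# Illustrative sketch: computing ROUGE-1/ROUGE-2/ROUGE-L with the open-source
# `rouge-score` package (pip install rouge-score). The sentence pair below is
# made up; the results in Table 4 are averaged over the full Gigaword test set.
from rouge_score import rouge_scorer

reference = "police arrest five anti-nuclear protesters"           # gold headline
candidate = "police detain five protesters at anti-nuclear rally"  # model output

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    # Each entry contains precision, recall, and F1; summarization tables
    # such as Table 4 usually report the F1 value.
    print(f"{name}: F1 = {score.fmeasure:.4f}")
```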