Research Article

MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model

Table 5

Comparison with existing models.

| Models | Pretraining | EM | F1 | Epoch | Learning rate |
| --- | --- | --- | --- | --- | --- |
| QANet + ELMo [27] | | 27.71 | 30.33 | | |
| BERT [28] | Base | 30.10 | 33.36 | | |
| NAQANet [19] | | 46.20 | 49.24 | | |
| MTMSN [21] | Base | 68.17 | 72.81 | | |
| MTMSN [21] | Large | 76.68 | 80.54 | | |
| TbMS [22] | Base | 66.91 | 70.55 | | |
| TbMS [22] | Large_SQuAD | 76.91 | 79.92 | | |
| MTQA | pre_Electra_Base | 75.67 | 79.99 | 8 | 3e-5 |
| MTQA | pre_Electra_Large | 81.25 | 85.26 | 12 | 5e-6 |
| Human performance [19] | | 92.38 | 95.98 | | |

EM (exact match) and F1 are the accuracy metrics; Epoch and Learning rate are reported only for the MTQA configurations.