Research Article
MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model
Table 5
Comparison with existing models.
| Models | Pretraining | EM | F1 | Epoch | Learning rate |
| --- | --- | --- | --- | --- | --- |
| QANet + ELMo [27] | — | 27.71 | 30.33 | — | — |
| BERT [28] | Base | 30.10 | 33.36 | — | — |
| NAQANet [19] | — | 46.20 | 49.24 | — | — |
| MTMSN [21] | Base | 68.17 | 72.81 | — | — |
| MTMSN [21] | Large | 76.68 | 80.54 | — | — |
| TbMS [22] | Base | 66.91 | 70.55 | — | — |
| TbMS [22] | Large_SQuAD | 76.91 | 79.92 | — | — |
| MTQA | pre_Electra_Base | 75.67 | 79.99 | 8 | 3e−5 |
| MTQA | pre_Electra_Large | 81.25 | 85.26 | 12 | 5e−6 |
| Human performance [19] | — | 92.38 | 95.98 | — | — |