Research Article
MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model
Table 1
Best results and key innovations of each model.
| Model design perspective | Model | EM | F1 | Innovations |
| --- | --- | --- | --- | --- |
| Operations on external methods | NAQANet | 46.20 | 49.24 | First model built for the DROP corpus, with four decoding heads |
| | NABERT+ | 64.61 | 67.35 | Replaces the NAQANet encoder with BERT encoding |
| | MTMSN | 76.68 | 80.54 | Adds two decoding heads and a beam search algorithm on top of NABERT+ |
| | TbMS | 76.91 | 79.92 | Improves and extends the answer prediction algorithm |
| | TASE | — | — | — |
| Operations on the model itself | QANet + ELMo | 27.71 | 30.33 | Trains and tests the existing model directly on the DROP corpus |
| | BERTBase | 30.10 | 33.36 | — |
| | GENBERT | 68.20 | 72.80 | Uses the Transformer's internal structure for decoding, with secondary pretraining |