
Research Article

A Stacked BiLSTM Neural Network Based on Coattention Mechanism for Question Answering

Table 3

Experimental results of different baseline models and our proposed models on the Train-All data.

Idx | Model                                                 | MAP    | MRR
1   | Probabilistic quasi-synchronous grammar [35]          | 0.6029 | 0.6852
2   | Tree edit models [2]                                  | 0.6091 | 0.6917
3   | Linear-chain CRF [17]                                 | 0.6307 | 0.7477
4   | LCLR [18]                                             | 0.7092 | 0.7700
5   | Bigram + count [38]                                   | 0.7113 | 0.7846
6   | Three-layer BiLSTM + BM25 [6]                         | 0.7134 | 0.7913
7   | Convolutional deep neural networks [39]               | 0.7459 | 0.8078
8   | BiLSTM/CNN with attention [7]                         | 0.7111 | 0.8322
9   | Attentive LSTM [1]                                    | 0.7530 | 0.8300
10  | BiLSTM encoder-decoder with step attention [8]        | 0.7261 | 0.8018
11  | BiLSTM                                                | 0.6982 | 0.7764
12  | Stacked BiLSTM                                        | 0.7127 | 0.7893
13  | BiLSTM with coattention                               | 0.7325 | 0.7962
14  | Stacked BiLSTM with coattention                       | 0.7451 | 0.8114
15  | Stacked BiLSTM with coattention (cosine + Euclidean)  | 0.7613 | 0.8401
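The MAP and MRR columns follow the standard definitions for answer-sentence ranking. The sketch below is not taken from the paper; it is a minimal illustration of how these two metrics are typically computed, assuming each question comes with a list of binary relevance labels already sorted by descending model score.

```python
# Minimal MAP/MRR sketch for answer ranking (illustrative, not the authors' code).

def average_precision(labels_sorted):
    """Average precision for one question; labels are 0/1, sorted by model score."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(labels_sorted, start=1):
        if label == 1:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def reciprocal_rank(labels_sorted):
    """Reciprocal rank of the first correct answer for one question."""
    for rank, label in enumerate(labels_sorted, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0

def map_mrr(per_question_labels):
    """Mean average precision and mean reciprocal rank over all questions."""
    n = len(per_question_labels)
    mean_ap = sum(average_precision(l) for l in per_question_labels) / n
    mean_rr = sum(reciprocal_rank(l) for l in per_question_labels) / n
    return mean_ap, mean_rr

# Example: two questions whose candidates are already sorted by model score.
print(map_mrr([[0, 1, 0, 1], [1, 0]]))  # -> (0.75, 0.75)
```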
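Row 15 scores question-answer pairs with a combination of cosine similarity and Euclidean distance. The exact combination used by the authors is not reproduced here; the snippet below is only a hypothetical sketch, assuming pooled sentence vectors from the BiLSTM encoder and a weighting parameter `alpha` that is an assumption, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def combined_similarity(q_vec, a_vec, alpha=0.5):
    """Hypothetical combined score: weighted sum of cosine similarity and a
    Euclidean-distance term mapped into (0, 1]; alpha is an assumed weight."""
    cos = F.cosine_similarity(q_vec, a_vec, dim=-1)        # in [-1, 1]
    euc = torch.norm(q_vec - a_vec, p=2, dim=-1)           # >= 0
    return alpha * cos + (1.0 - alpha) * (1.0 / (1.0 + euc))
```

Combining the two measures lets the score reflect both the angle and the magnitude difference between the question and answer representations, which is one plausible reason the cosine + Euclidean variant outperforms the plain coattention model in the table.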
