Wireless Communications and Mobile Computing
Volume 2018, Article ID 2678976, 8 pages
https://doi.org/10.1155/2018/2678976
Research Article

Using Sentence-Level Neural Network Models for Multiple-Choice Reading Comprehension Tasks

1 School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
2 Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan 030006, China
3 School of Foreign Languages, Shanxi University, Taiyuan 030006, China

Correspondence should be addressed to Yuanlong Wang; ylwang@sxu.edu.cn

Received 28 March 2018; Accepted 13 June 2018; Published 3 July 2018

Academic Editor: Tianyong Hao

Copyright © 2018 Yuanlong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. D. Chen, J. Bolton, and C. D. Manning, “A thorough examination of the CNN/Daily Mail reading comprehension task,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, pp. 2358–2367, August 2016.
  2. Z. Y. Liu, M. S. Sun, Y. K. Lin, and R. B. Xie, “Knowledge representation learning: a review,” Journal of Computer Research and Development, vol. 53, no. 2, pp. 247–261, 2016.
  3. M. Richardson, C. J. C. Burges, and E. Renshaw, “MCTest: a challenge dataset for the open-domain machine comprehension of text,” in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 193–203, 2013.
  4. F. Hill, A. Bordes, S. Chopra, and J. Weston, “The Goldilocks principle: reading children's books with explicit memory representations,” 2015, https://arxiv.org/abs/1511.02301.
  5. P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, “SQuAD: 100,000+ questions for machine comprehension of text,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–2392, 2016.
  6. M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer, “TriviaQA: a large scale distantly supervised challenge dataset for reading comprehension,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1601–1611, 2017.
  7. Y. M. Cui, T. Liu, and Z. P. Chen, “Consensus attention-based neural networks for Chinese reading comprehension,” 2016, https://arxiv.org/abs/1607.02250.
  8. Y. Cui, Z. Chen, S. Wei, S. Wang, T. Liu, and G. Hu, “Attention-over-attention neural networks for reading comprehension,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 593–602, 2017.
  9. K. M. Hermann, T. Kocisky, E. Grefenstette et al., “Teaching machines to read and comprehend,” in Advances in Neural Information Processing Systems, pp. 1684–1692, 2015.
  10. K. Pichotta and R. J. Mooney, “Using sentence-level LSTM language models for script inference,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 279–289, 2016.
  11. M. Sachan, A. Dubey, E. P. Xing, and M. Richardson, “Learning answer-entailing structures for machine comprehension,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 239–249, 2015.
  12. H. Wang, M. Bansal, K. Gimpel, and D. McAllester, “Machine comprehension with syntax, frames, and semantics,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 700–706, 2015.
  13. A. Trischler, Z. Ye, and X. Yuan, “A parallel-hierarchical model for machine comprehension on sparse data,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 432–441, 2016.
  14. W. Yin, S. Ebert, and H. Schütze, “Attention-based convolutional neural network for machine comprehension,” in Proceedings of the Workshop on Human-Computer Question Answering, pp. 15–21, San Diego, California, June 2016.
  15. R. Kadlec, M. Schmid, O. Bajgar, and J. Kleindienst, “Text understanding with the attention sum reader network,” 2016, https://arxiv.org/abs/1603.01547.
  16. B. Dhingra, H. Liu, Z. Yang et al., “Gated-attention readers for text comprehension,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1832–1846, 2017.
  17. Y. L. Wang, “Sentence composition model for reading comprehension,” Journal of Computer Applications, vol. 37, no. 6, pp. 1741–1746, 2017.
  18. M. D. Zeiler, “ADADELTA: an adaptive learning rate method,” 2012, https://arxiv.org/abs/1212.5701.
  19. T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” in Proceedings of the Workshop at ICLR, pp. 1–12, 2013.
  20. N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  21. Theano Development Team, “Theano: a Python framework for fast computation of mathematical expressions,” 2016, https://arxiv.org/abs/1605.02688.
  22. B. Peng, Z. Lu, H. Li, and K.-F. Wong, “Toward neural network-based reasoning,” 2015, https://arxiv.org/abs/1508.05508.
  23. Z. C. Zhang, Y. Zhang, and T. Liu, “Answer sentence extraction of reading comprehension based on shallow semantic tree kernel,” Journal of Chinese Information Processing, vol. 22, no. 1, pp. 80–86, 2008.
  24. K. S. Tai, R. Socher, and C. D. Manning, “Improved semantic representations from tree-structured long short-term memory networks,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 1556–1566, 2015.