Computational Intelligence and Neuroscience
Volume 2016, Article ID 3506261, 11 pages
http://dx.doi.org/10.1155/2016/3506261
Research Article

Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model

Institute of Electronics, Chinese Academy of Sciences, Beijing, China

Received 3 January 2016; Revised 16 July 2016; Accepted 26 July 2016

Academic Editor: Trong H. Duong

Copyright © 2016 Size Bi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
