TY  - JOUR
A2  - Zhang, Yin
AU  - Zhu, Hu
AU  - Wang, Ze
AU  - Shi, Yu
AU  - Hua, Yingying
AU  - Xu, Guoxia
AU  - Deng, Lizhen
PY  - 2020
DA  - 2020/09/23
TI  - Multimodal Fusion Method Based on Self-Attention Mechanism
SP  - 8843186
VL  - 2020
AB  - Multimodal fusion is one of the most popular directions of multimodal research and an emerging field of artificial intelligence. It aims to exploit the complementarity of heterogeneous data and provide reliable classification for the model. Multimodal data fusion transforms data from multiple single-mode representations into a compact multimodal representation. Most previous studies in this field used tensor-based multimodal representations; as the input is converted into a tensor, the dimensions and computational complexity increase exponentially. In this paper, we propose a low-rank tensor multimodal fusion method with an attention mechanism, which improves efficiency and reduces computational complexity. We evaluate our model on three multimodal fusion tasks over the public datasets CMU-MOSI, IEMOCAP, and POM. Our model achieves good performance while flexibly capturing global and local connections. Experiments show that, compared with other tensor-based multimodal fusion methods, our model steadily achieves better results under a series of attention mechanisms.
SN  - 1530-8669
UR  - https://doi.org/10.1155/2020/8843186
DO  - 10.1155/2020/8843186
JF  - Wireless Communications and Mobile Computing
PB  - Hindawi
ER  - 