Research Article

A Novel Time-Incremental End-to-End Shared Neural Network with Attention-Based Feature Fusion for Multiclass Motor Imagery Recognition

Figure 7

The structure of the attention model. The attention model consists of two fully connected layers, $W_1$ and $W_2$, and a hyperbolic tangent function $\tanh$. The first fully connected layer computes $u_i = W_1 h_i + b_1$ from each input feature vector $h_i$, where $b_1$ is the bias. The hyperbolic tangent function then applies a nonlinear transformation to $u_i$ to obtain $v_i = \tanh(u_i)$. After the second fully connected layer, the score $e_i = W_2 v_i$ is obtained. Softmax then computes the weight coefficient $\alpha_i = \exp(e_i)/\sum_j \exp(e_j)$ of each feature in the sequence. The attention model therefore outputs, for each input feature vector $h_i$, the weighted feature $\alpha_i h_i$.
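For concreteness, the following is a minimal PyTorch sketch of the FC → tanh → FC → softmax structure described above. The module name, layer dimensions, and the choice of a scalar score per time step are illustrative assumptions; the figure specifies only the sequence of operations, not these details.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Attention block per Figure 7: FC (W1, b1) -> tanh -> FC (W2) -> softmax."""

    def __init__(self, feature_dim: int, hidden_dim: int):
        super().__init__()
        # First fully connected layer: u_i = W1 h_i + b1
        self.fc1 = nn.Linear(feature_dim, hidden_dim)
        # Second fully connected layer: e_i = W2 v_i (scalar score per step; assumed)
        self.fc2 = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, feature_dim), one feature vector h_i per step
        u = self.fc1(h)                   # u_i = W1 h_i + b1
        v = torch.tanh(u)                 # v_i = tanh(u_i)
        e = self.fc2(v)                   # e_i = W2 v_i
        alpha = torch.softmax(e, dim=1)   # weight coefficient alpha_i over the sequence
        return alpha * h                  # weighted features alpha_i h_i

# Usage: a batch of 32 sequences, each with 10 feature vectors of dimension 64
attn = AdditiveAttention(feature_dim=64, hidden_dim=32)
weighted = attn(torch.randn(32, 10, 64))  # shape: (32, 10, 64)
```

The softmax is taken over the sequence dimension, so the coefficients $\alpha_i$ sum to one across the sequence and rescale each input feature vector by its learned importance.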