Research Article

Image Super-Resolution Network Based on Feature Fusion Attention

Figure 3

Multipath channel attention and pixel attention modules. The multipath channel attention block (left) splits the input into two branches after the first convolution layer; the features flow through two different pooling paths and are then concatenated and activated before leaving the block. The pixel attention (PA) module (right) consists of two convolution layers and two activation layers connected in alternating order.
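The two modules described in the caption can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the weight matrices (`w`, `w1`, `w2`), the choice of average and max pooling as the two branches, and the use of 1x1 convolutions (realized here as channel-wise matrix multiplies) are all assumptions made for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    # x: (C, H, W) feature map; w: (C, 2C) hypothetical fusion weights.
    # Two pooling branches (average and max), concatenated, then a
    # sigmoid activation produces per-channel weights.
    avg = x.mean(axis=(1, 2))           # global average pooling branch
    mx = x.max(axis=(1, 2))             # global max pooling branch
    pooled = np.concatenate([avg, mx])  # concatenate the two branches
    scale = sigmoid(w @ pooled)         # (C,) channel attention weights
    return x * scale[:, None, None]

def pixel_attention(x, w1, w2):
    # Two 1x1 convolutions (channel-mixing matrix multiplies) with
    # alternating activations: conv -> ReLU -> conv -> sigmoid,
    # yielding one attention weight per spatial position.
    c, h, width = x.shape
    flat = x.reshape(c, -1)
    hidden = np.maximum(w1 @ flat, 0)   # first conv + ReLU
    mask = sigmoid(w2 @ hidden)         # second conv + sigmoid
    return x * mask.reshape(1, h, width)
```

Both functions return a tensor of the input's shape, with features rescaled by attention weights in (0, 1).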