Research Article

FNet: A Two-Stream Model for Detecting Adversarial Attacks against 5G-Based Deep Learning Services

Figure 1

Illustration of our two-stream network (FNet). The color code is as follows: light green = Conv, light blue = batch norm + tanh, deep blue = batch norm + ReLU, yellow = avg pooling, orange = max pooling, and purple = fully connected layers. The RGB stream takes the original image as input and captures subtle artifacts in the RGB features, such as contrast differences and unnatural pixels. The noise stream first obtains noise feature maps through an SRM filter layer and leverages these noise features to provide additional evidence for adversarial image detection. Bilinear pooling combines the features extracted by the two streams. Finally, the combined features are passed through a decision network, which outputs the predicted label and determines whether the input image is adversarial.
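To make the caption's data flow concrete, the sketch below outlines one way the two-stream design could be realized in PyTorch. It is a minimal illustration, not the authors' implementation: the layer counts, channel widths, pooling placement, classifier sizes, and the single KV high-pass kernel used in `SRMFilter` are assumptions (the paper's SRM filter bank and exact architecture may differ); only the overall structure (RGB stream, SRM-based noise stream, bilinear pooling, decision network) follows the caption.

```python
# Hypothetical sketch of the two-stream FNet design described in the caption.
# Layer counts, channel widths, and the SRM kernel choice are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRMFilter(nn.Module):
    """Fixed high-pass (SRM-style) filtering that produces noise feature maps; weights are not trained."""
    def __init__(self):
        super().__init__()
        kv = torch.tensor([[-1,  2,  -2,  2, -1],
                           [ 2, -6,   8, -6,  2],
                           [-2,  8, -12,  8, -2],
                           [ 2, -6,   8, -6,  2],
                           [-1,  2,  -2,  2, -1]], dtype=torch.float32) / 12.0
        # One kernel applied per RGB channel; the paper's full SRM filter bank may differ.
        self.register_buffer("weight", kv.expand(3, 1, 5, 5).clone())

    def forward(self, x):                        # x: (B, 3, H, W)
        return F.conv2d(x, self.weight, padding=2, groups=3)

def conv_block(cin, cout, act):
    # Conv -> batch norm -> tanh/ReLU -> pooling, mirroring the caption's color-coded blocks
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), act, nn.MaxPool2d(2))

class Stream(nn.Module):
    """Shared backbone shape used for both the RGB stream and the noise stream."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(conv_block(3, 32, nn.Tanh()),
                                      conv_block(32, 64, nn.ReLU()),
                                      conv_block(64, feat_dim, nn.ReLU()),
                                      nn.AdaptiveAvgPool2d(1))

    def forward(self, x):
        return self.features(x).flatten(1)       # (B, feat_dim)

class FNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.srm = SRMFilter()
        self.rgb_stream = Stream(feat_dim)
        self.noise_stream = Stream(feat_dim)
        self.decision = nn.Sequential(nn.Linear(feat_dim * feat_dim, 256),
                                      nn.ReLU(),
                                      nn.Linear(256, 2))   # adversarial vs. benign

    def forward(self, img):
        f_rgb = self.rgb_stream(img)                 # features from the original RGB image
        f_noise = self.noise_stream(self.srm(img))   # features from the SRM noise maps
        # Bilinear pooling: outer product of the two feature vectors,
        # followed by signed square-root and L2 normalization.
        b = torch.einsum("bi,bj->bij", f_rgb, f_noise).flatten(1)
        b = F.normalize(torch.sign(b) * torch.sqrt(b.abs() + 1e-8), dim=1)
        return self.decision(b)                      # predicted label logits
```

In this reading, bilinear pooling is the outer product of the two streams' pooled feature vectors, which lets every RGB feature interact multiplicatively with every noise feature before the decision network produces the adversarial/benign prediction.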