Research Article

Boosting Adversarial Attacks on Neural Networks with Better Optimizer

Figure 2

Classification of a clean image and its corresponding adversarial example by Inc-v3 and Inc-v4. The first row shows the top 10 confidence distributions for the clean image, which both models classify correctly. The second row shows the top 10 confidence distributions for the adversarial example generated against Inc-v3 by AI-FGM. The adversarial example successfully attacks Inc-v3 in the white-box setting and transfers to Inc-v4 in the black-box setting, in both cases with high confidence. (a) Clean. (b) Adversarial.
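To make the AI-FGM attack behind this figure concrete, the following is a minimal illustrative sketch of an Adam-style iterative fast-gradient attack on a toy logistic model. It is not the paper's implementation: the function name `ai_fgm_attack`, the toy model p(y=1|x) = sigmoid(w·x + b), and all hyperparameter defaults are assumptions for illustration; the actual method applies the same Adam-moment update to the input gradient of a deep network's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ai_fgm_attack(x, y, w, b, eps=0.3, steps=10,
                  beta1=0.9, beta2=0.999, delta=1e-8):
    """Illustrative Adam-style iterative fast-gradient attack on a toy
    logistic model p(y=1|x) = sigmoid(w.x + b). Hypothetical sketch,
    not the paper's implementation."""
    lr = eps / steps              # spread the eps budget over the iterations
    x_adv = x.astype(float).copy()
    m = np.zeros_like(x_adv)      # first moment estimate (Adam)
    v = np.zeros_like(x_adv)      # second moment estimate (Adam)
    for t in range(1, steps + 1):
        p = sigmoid(w @ x_adv + b)
        g = (p - y) * w           # gradient of cross-entropy loss w.r.t. input
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        # ascend the loss along the Adam direction's sign,
        # then project back into the L-infinity ball of radius eps
        x_adv = x_adv + lr * np.sign(m_hat / (np.sqrt(v_hat) + delta))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

On a correctly classified point, the returned `x_adv` stays within the eps ball around `x` while its cross-entropy loss increases, which is the property the white-box attack in the figure exploits.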