Research Article

Boosting Adversarial Attacks on Neural Networks with Better Optimizer

Figure 1

Schematic optimization paths of MI-FGSM and the proposed AI-FGM. MI-FGSM accumulates the gradients of data points along the optimization path, while Adam accumulates both the gradients and the squares of the gradients. MI-FGSM uses a fixed step size, while AI-FGM uses a decaying step size that leads to better convergence.
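To make the contrast in the caption concrete, below is a minimal NumPy sketch of the two update rules. It assumes a toy one-dimensional loss gradient (`grad_loss`), conventional Adam hyperparameters (`beta1=0.9`, `beta2=0.999`), and a base step of `eps / steps`; since the caption does not spell out AI-FGM's exact step-size schedule, the sketch uses plain bias-corrected Adam steps, whose effective per-coordinate step shrinks as the second-moment estimate grows. All function and parameter names are illustrative, not the authors' released implementation.

```python
import numpy as np

def grad_loss(x):
    """Toy stand-in for the loss gradient w.r.t. the input (illustrative)."""
    return np.sign(x) * (1.0 + 0.1 * np.cos(x))

def mi_fgsm(x0, eps=0.3, steps=10, mu=1.0):
    # MI-FGSM: accumulate L1-normalized gradients (momentum) and take
    # a fixed step of size eps / steps in the sign direction.
    alpha = eps / steps                              # fixed step size
    x, g = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        grad = grad_loss(x)
        g = mu * g + grad / np.sum(np.abs(grad))     # gradient accumulation
        x = x + alpha * np.sign(g)
        x = np.clip(x, x0 - eps, x0 + eps)           # stay in the eps-ball
    return x

def ai_fgm(x0, eps=0.3, steps=10, beta1=0.9, beta2=0.999, delta=1e-8):
    # Adam-style attack: accumulate the gradients (m) and their squares (v);
    # the bias-corrected ratio m_hat / sqrt(v_hat) gives a per-coordinate
    # step that decays as v grows, which the caption credits for better
    # convergence. The schedule here is an assumption, not the paper's.
    alpha = eps / steps                              # assumed base step size
    x = x0.copy()
    m, v = np.zeros_like(x0), np.zeros_like(x0)
    for t in range(1, steps + 1):
        grad = grad_loss(x)
        m = beta1 * m + (1 - beta1) * grad           # first moment (gradients)
        v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (squares)
        m_hat = m / (1 - beta1 ** t)                 # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x = x + alpha * m_hat / (np.sqrt(v_hat) + delta)
        x = np.clip(x, x0 - eps, x0 + eps)           # stay in the eps-ball
    return x

# Quick comparison on a toy input.
x0 = np.array([0.5, -1.2, 0.3])
print("MI-FGSM:", mi_fgsm(x0))
print("AI-FGM :", ai_fgm(x0))
```

Both sketches stay inside the same epsilon-ball; the only differences are the quantities accumulated and the effective step size, which is exactly the distinction Figure 1 illustrates.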