Review Article

A Survey on Adversarial Attack in the Age of Artificial Intelligence

Table 10

Shortcomings of similar reviews.

Author | Main content | Shortcomings

Author: Guofu Li et al. 2018 [126]
Main content: This article introduces the concepts and types of adversarial machine learning. It mainly reviews attack and defense methods in the field of deep learning and presents several new research directions, such as generative adversarial networks. Adversarial attack strategies in complex scenarios, such as reinforcement learning and physical attacks, are also briefly discussed.
Shortcomings: This paper mainly covers image classification in the field of computer vision. There is little research on the text and malware domains, and research on nonconvolutional architectures is also limited.

Author: Naveed Akhtar et al. 2018 [127]
Main content: This paper reviews adversarial attack work on deep learning in computer vision. It mainly introduces adversarial attacks and defensive measures under deep learning. In addition, a retrospective of the security application literature provides a broader outlook on future research directions for adversarial attacks.
Shortcomings: It mainly addresses the serious threat that image perturbations pose to deep learning models in computer vision. Examples of creating adversarial samples for text and malware classification are only briefly mentioned.

Author: Shilin Qiu et al. 2019 [128]
Main content: This paper summarizes the latest research progress on adversarial attack and defense techniques in deep learning. It mainly reviews adversarial attacks on the target model in the training and testing stages, and surveys applications of adversarial attacks in four fields (image, text, cyberspace security, and the physical world) as well as existing defense methods.
Shortcomings: Adversarial attacks are not analyzed in terms of their outcomes.

Author: Wei Emma Zhang et al. 2019 [129]
Main content: This paper is the first to comprehensively summarize adversarial attacks on deep neural network models for text. It mainly reviews adversarial attacks and deep learning in the field of natural language processing, briefly introduces defense methods, and discusses some open issues.
Shortcomings: The paper summarizes research on generating textual adversarial examples against DNNs in natural language processing, but it seldom discusses the architecture of the deep neural networks themselves.

Author: Wei Jin et al. 2020 [98]
Main content: This paper gives a comprehensive introduction to the classification of graph neural network adversarial attack algorithms and defense strategies. The performance of different defense methods under different attacks is also studied empirically, and a repository of representative algorithms is developed.
Shortcomings: It summarizes adversarial attack and defense techniques for graph data but does not cover adversarial attacks on other types of data.

Author: Wenqi Wang et al. 2020 [130]
Main content: This paper comprehensively summarizes research on textual adversarial examples in different fields. It mainly presents attack and defense taxonomies for DNNs on text, and discusses how to build a robust DNN model through testing and verification.
Shortcomings: Adversarial attack and defense techniques in the image and malicious code domains are not analyzed.

Author: Han Xu et al. 2020 [131]
Main content: This paper reviews attack and defense methods for DNN models on image, graph, and text data, covering the algorithms for generating adversarial samples on the three data types and the corresponding defense strategies.
Shortcomings: From the perspective of application fields, image classification and natural language processing are covered comprehensively, while malicious code detection is only briefly mentioned.

Author: Jiliang Zhang et al. 2020 [132]
Main content: This paper gives a comprehensive summary of existing adversarial example generation methods. It mainly introduces the basic concept of adversarial examples and compares different adversarial attack methods and defensive measures.
Shortcomings: Like most reviews, this paper introduces and compares several typical attack algorithms such as L-BFGS, FGSM, and C&W, but it does not cover adversarial attacks in the text and malicious code domains.
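Among the attack algorithms named in the table above, FGSM (the Fast Gradient Sign Method) is the simplest: it perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. The following is a minimal NumPy sketch on a hypothetical logistic-regression classifier; the weights, input, and budget here are illustrative values for exposition only, not from any of the cited papers.

```python
import numpy as np

def fgsm_attack(x, w, b, y, epsilon):
    """FGSM against a logistic-regression classifier (illustrative sketch).

    x: input vector; w, b: model weights and bias; y: true label (0 or 1);
    epsilon: perturbation budget. Returns the adversarial example
    x_adv = x + epsilon * sign(d loss / d x) for the cross-entropy loss.
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction for class 1
    grad_x = (p - y) * w              # gradient of cross-entropy w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Illustrative example: a clean input confidently classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_attack(x, w, b, y=1, epsilon=0.3)
# The perturbation moves x against the true-class gradient, so the model's
# confidence in label 1 drops for x_adv relative to x.
```

The same one-step recipe carries over to deep networks, where the gradient is obtained by backpropagation instead of the closed form used here.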