• Journal of Internet Computing and Services
    ISSN 2287-1136 (Online) / ISSN 1598-0170 (Print)
    https://jics.or.kr/

A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models


Si-on Jeong, Tae-hyun Han, Seung-bum Lim, Tae-jin Lee, Journal of Internet Computing and Services, Vol. 24, No. 4, pp. 25-36, Aug. 2023
DOI: 10.7472/jksii.2023.24.4.25
Keywords: Artificial intelligence, Robustness, Adversarial attack

Abstract

Today, as AI (Artificial Intelligence) technology is introduced into various fields, including security, its development is accelerating. However, alongside the development of AI technology, attack techniques that cleverly bypass malicious-behavior detection are also evolving. Adversarial attacks have emerged that induce misclassification and reduce reliability by making fine adjustments to the input values during an AI model's classification process. The attacks that will appear in the future are likely not to be entirely new attacks created by an attacker, but rather methods that evade detection systems by slightly modifying existing attacks, as adversarial attacks do. It is therefore necessary to develop a robust model that can respond to these malware variants. In this paper, we propose two methods of generating adversarial attacks as efficient adversarial attack generation techniques for improving the robustness of AI models: an XAI-based attack that uses an XAI (explainable AI) technique, and a reference-based attack that searches the model's decision boundary. We then build a classification model on a malicious-code dataset to compare performance with the PGD attack, one of the existing adversarial attacks. In terms of generation speed, the XAI-based attack and the reference-based attack take 0.35 seconds and 0.47 seconds, respectively, whereas the existing PGD attack takes 20 minutes. In particular, the reference-based attack achieves a generation success rate of 97.7%, higher than the 75.5% of the existing PGD attack. Therefore, the proposed techniques enable more efficient adversarial attack generation and are expected to contribute to future research on building robust AI models.
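
To make the comparison in the abstract concrete, the sketch below shows the standard PGD baseline the paper measures against, assuming a PyTorch classifier with inputs scaled to [0, 1]. The second function, xai_guided_attack, is a hypothetical illustration of the general idea of an explanation-guided, single-pass perturbation (here using a simple gradient-times-input saliency map), not the authors' exact XAI-based or reference-based procedure; the function names, eps, alpha, steps, and top_k are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=0.1, alpha=0.01, steps=40):
        """Projected Gradient Descent: iteratively perturb x within an
        L-infinity ball of radius eps to maximize the classification loss."""
        x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back into the eps-ball around x.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv

    def xai_guided_attack(model, x, y, eps=0.1, top_k=10):
        """Hypothetical single-pass variant (illustrative assumption, not the
        paper's method): use gradient-times-input as a simple saliency map and
        perturb only the top-k most influential features."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        saliency = (grad * x).abs().flatten(1)          # per-feature importance
        idx = saliency.topk(top_k, dim=1).indices       # most influential features
        mask = torch.zeros_like(x).flatten(1)
        mask.scatter_(1, idx, 1.0)                      # perturb only those features
        x_adv = x + eps * mask.view_as(x) * grad.sign()
        return x_adv.clamp(0, 1).detach()

A single gradient pass over a handful of features is orders of magnitude cheaper than the multi-step PGD loop, which is consistent with the speed gap reported in the abstract (fractions of a second versus roughly 20 minutes), though the actual generation procedure and success-rate figures come from the paper itself.
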


Cite this article
[APA Style]
Jeong, S., Han, T., Lim, S., & Lee, T. (2023). A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models. Journal of Internet Computing and Services, 24(4), 25-36. DOI: 10.7472/jksii.2023.24.4.25.

[IEEE Style]
S. Jeong, T. Han, S. Lim, T. Lee, "A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models," Journal of Internet Computing and Services, vol. 24, no. 4, pp. 25-36, 2023. DOI: 10.7472/jksii.2023.24.4.25.

[ACM Style]
Si-on Jeong, Tae-hyun Han, Seung-bum Lim, and Tae-jin Lee. 2023. A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models. Journal of Internet Computing and Services, 24, 4, (2023), 25-36. DOI: 10.7472/jksii.2023.24.4.25.