A method to generate adversarial examples based on color variety of adjacent pixels

Open Access Article · Conference Proceedings
Authors: Tomoki Kamegawa, Masaomi Kimura, Imam Mukhlash, Mohammad Iqbal

Abstract: Deep neural networks have improved the performance of large-scale learning tasks such as image recognition and speech recognition. However, neural networks also have vulnerabilities. Adversarial examples are generated by adding perturbations to images so that image classifiers make incorrect predictions. A well-known perturbation attack is JSMA, which generates perturbations relatively quickly, requires only simple procedures, and is widely used in cybersecurity, anomaly detection, and intrusion detection. However, there are problems with the way it perturbs pixels: JSMA's perturbations are easily perceivable by the human eye because JSMA adds large perturbations to individual pixels. Some previous methods for generating adversarial examples did not assume that adversarial examples would be checked by human eyes and allowed a large perturbation to be added to a single pixel. However, in situations where a deep learning model can cause significant damage if it misrecognizes an input, a visual check by a human is necessary. In such cases, adversarial examples should not only cause misclassification in the image classifier but also use as little perturbation as possible so that humans do not perceive it. We propose methods to address these problems with JSMA. Specifically, our method adjusts the amount of perturbation by calculating the variance between the value of the pixel to be perturbed and its surrounding pixels. If a large perturbation is added to an area of the image with large pixel value variation, the perturbation remains imperceptible; in such a case, perceivability does not increase significantly even with a slightly larger perturbation. In contrast, if a large perturbation is added to an area with small pixel value variation, the perturbation becomes more perceptible; in such a case, the perturbation must be small. In our previous study, we set a threshold to classify perturbations into two classes, large and small: if the variance was larger than the threshold, a larger perturbation was added; if it was smaller, a smaller perturbation was added, which reduced the total amount of perturbation. However, there was still room to reduce the perceptibility of the perturbation. In this study, we focus on the fact that perturbations are perceived differently depending on the color of the pixel, so the amount of perturbation should vary from pixel to pixel rather than being a fixed amount. We calculate not only the variance of the surrounding pixels but also the variance of a larger area, and use the ratio of these variances to vary the amount of perturbation per pixel. Experimental results on CIFAR-10 showed that the proposed method reduced the amount of perturbation added to pixels while achieving a misclassification success rate comparable to that of JSMA and our previous method. We also confirmed that the reduced perturbation made the perturbation less perceptible.
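To illustrate the variance-ratio idea described in the abstract, the following minimal Python sketch scales a per-pixel perturbation step by the ratio between the variance of the pixel's immediate neighborhood and the variance of a larger surrounding area. This is not the authors' actual implementation: the window radii, base step size, capping rule, and single-channel image are all illustrative assumptions, and the function names (local_variance, scaled_perturbation) are hypothetical.

import numpy as np

def local_variance(image, row, col, radius):
    # Variance of pixel values in a (2*radius+1) square window around (row, col),
    # clipped to the image boundary.
    r0, r1 = max(row - radius, 0), min(row + radius + 1, image.shape[0])
    c0, c1 = max(col - radius, 0), min(col + radius + 1, image.shape[1])
    return float(np.var(image[r0:r1, c0:c1]))

def scaled_perturbation(image, row, col, base_step=0.1,
                        small_radius=1, large_radius=3, eps=1e-8):
    # Scale the base perturbation by the ratio of the adjacent-pixel variance
    # to the variance of a larger surrounding area, so pixels in regions with
    # large local variation receive relatively larger perturbations.
    v_small = local_variance(image, row, col, small_radius)
    v_large = local_variance(image, row, col, large_radius)
    ratio = v_small / (v_large + eps)
    return base_step * min(ratio, 1.0)  # cap so the step never exceeds base_step

# Example: perturb one pixel of a grayscale image with values in [0, 1]
image = np.random.rand(32, 32).astype(np.float32)
step = scaled_perturbation(image, row=10, col=12)
image[10, 12] = np.clip(image[10, 12] + step, 0.0, 1.0)

In this sketch, a pixel whose immediate neighborhood varies as much as its wider surroundings receives the full base step, while a pixel sitting in a locally smooth region receives a proportionally smaller step; how the ratio and cap are actually defined in the paper may differ.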

Keywords: Adversarial examples, Deep learning, Image recognition, Neural network

DOI: 10.54941/ahfe1004184
