doi: 10.3934/ipi.2020046

Adversarial defense via the data-dependent activation, total variation minimization, and adversarial training

1. Department of Mathematics, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112-0090, USA
2. Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA
3. Department of Mathematics, Duke University, Durham, NC 27708, USA

Corresponding author: wangbaonj@gmail.com

Received: November 2019. Revised: April 2020. Published: August 2020.

We improve the robustness of deep neural nets (DNNs) to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both the generalization and the robustness of DNNs. On the CIFAR10 benchmark, we raise the robust accuracy of the adversarially trained ResNet20 from $ \sim 46\% $ to $ \sim 69\% $ under the state-of-the-art Iterative Fast Gradient Sign Method (IFGSM) adversarial attack. When we combine this data-dependent activation with total variation minimization on adversarial images and training-data augmentation, we improve the robust accuracy of ResNet56 by 38.9$ \% $ under the strongest IFGSM attack. Furthermore, we provide an intuitive explanation of our defense by analyzing the geometry of the feature space.
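Throughout, robustness is evaluated against gradient-based attacks such as IFGSM, which repeatedly takes signed-gradient steps on the classification loss and keeps the perturbation inside an $ \ell_\infty $ ball of radius $ \epsilon $. The following is a minimal PyTorch-style sketch of IFGSM for illustration only; the step size, iteration count, and the `model`/`loss_fn` handles are assumptions rather than the exact settings used in our experiments.

```python
import torch

def ifgsm_attack(model, loss_fn, x, y, epsilon=0.02, alpha=0.005, num_iter=10):
    """Minimal IFGSM sketch: iterative signed-gradient ascent with L-infinity projection.
    `alpha` and `num_iter` are illustrative placeholders, not the paper's settings."""
    x_adv = x.clone().detach()
    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # project back into the epsilon-ball around the clean image x
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in the valid range
    return x_adv.detach()
```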

Citation: Bao Wang, Alex Lin, Penghang Yin, Wei Zhu, Andrea L. Bertozzi, Stanley J. Osher. Adversarial defense via the data-dependent activation, total variation minimization, and adversarial training. Inverse Problems & Imaging, doi: 10.3934/ipi.2020046

Figure 1.  Training and testing procedures of the DNN with softmax and WNLL functions as the output activation layer. (a) and (b) show the training and testing steps of the standard DNN, respectively; (c) and (d) illustrate the training and testing procedures of the WNLL-activated DNN, respectively
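The WNLL output activation in Figure 1 replaces the softmax classifier with an interpolation of labels from template (training) features. The full method interpolates by solving a weighted nonlocal Laplacian system on a graph built over template and test features; as a rough, simplified stand-in that only conveys the data-dependent idea, the sketch below does Gaussian-weighted k-nearest-neighbor label interpolation in feature space. The function name, `k`, and the bandwidth choice are assumptions.

```python
import numpy as np

def interpolating_activation(test_feats, template_feats, template_labels,
                             num_classes, k=15):
    """Simplified data-dependent output activation (illustration only).
    Scores each test feature by Gaussian-weighted interpolation of the one-hot
    labels of its k nearest template features; the real WNLL activation instead
    solves a weighted nonlocal Laplacian system on the feature graph."""
    one_hot = np.eye(num_classes)[np.asarray(template_labels)]   # (n_templates, C)
    scores = np.zeros((len(test_feats), num_classes))
    for i, f in enumerate(test_feats):
        d2 = np.sum((template_feats - f) ** 2, axis=1)   # squared distances
        nn = np.argsort(d2)[:k]                          # k nearest templates
        bandwidth = d2[nn[-1]] + 1e-12                   # local distance scaling
        w = np.exp(-d2[nn] / bandwidth)
        scores[i] = w @ one_hot[nn] / w.sum()            # weighted label average
    return scores   # rows are interpolated class probabilities
```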
Figure 2.  Samples from CIFAR10. Panel (a): from top to bottom, the rows show the original images, the adversarial images obtained by attacking ResNet56 with FGSM and with IFGSM ($ \epsilon = 0.02 $), and the adversarial images obtained by attacking ResNet56-WNLL. Panel (b): the counterparts of panel (a) with $ \epsilon = 0.08 $. Panels (c) and (d): the TV-minimized versions of the images in (a) and (b), respectively
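The TV-minimized images in panels (c) and (d) are produced by applying Rudin-Osher-Fatemi-type total variation minimization to the (possibly adversarial) inputs before classification. One standard way to compute such images is gradient descent on a smoothed TV-plus-fidelity energy, sketched below; the fidelity weight, smoothing parameter, step size, and iteration count are illustrative assumptions, not the solver or settings used in our experiments.

```python
import numpy as np

def tv_minimize(img, lam=0.1, step=0.2, num_iter=100, eps=1e-6):
    """Gradient descent on the smoothed ROF energy
        E(u) = sum |grad u|_eps + (lam / 2) * ||u - img||^2
    for an H x W (or H x W x C) image with values in [0, 1].
    All parameters here are illustrative, not the paper's settings."""
    u = img.astype(np.float64).copy()
    for _ in range(num_iter):
        ux = np.roll(u, -1, axis=1) - u                # forward differences in x
        uy = np.roll(u, -1, axis=0) - u                # forward differences in y
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)         # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * (lam * (u - img) - div)            # descend the energy
    return np.clip(u, 0.0, 1.0)
```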
Figure 3.  Accuracy vs. $ \epsilon $ with no defense and with defense by the WNLL activation, TVM, and augmented training. (a) and (b) plot the results for the FGSM and IFGSM attacks, respectively
Figure 4.  Accuracy vs. training epochs for ResNet56 on CIFAR10. (a): without the additional FC layer; (b): with the additional FC layer
Figure 5.  Visualization of the features learned by the DNN with softmax ((a), (b), (c), (d)) and WNLL ((e), (f), (g), (h)) output activations. (a) and (b) plot the 2D features of the original and adversarial test images; (c) and (d) show the first two principal components of the 64D features of the original and adversarial test images, respectively. Panels (e) and (f) plot the first two principal components of the training and test features learned by ResNet56-WNLL; (g) and (h) show the first two principal components of the adversarial and TV-minimized adversarial images in the test set
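The 2D views in panels (c), (d), (g), and (h) come from projecting the 64-dimensional penultimate-layer features onto their first two principal components. A minimal sketch of that projection is below; how the 64D features are extracted from the network is outside this snippet and assumed given.

```python
import numpy as np

def first_two_principal_components(features):
    """Project an (n_samples, 64) feature matrix onto its first two principal
    components for 2D visualization."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T   # (n_samples, 2) coordinates for plotting
```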
Figure 6.  (a): number of IFGSM iterations vs. accuracy for ResNet20 and ResNet20-WNLL trained with PGD adversarial training. (b): $ \epsilon $ vs. accuracy for ResNet20 and ResNet20-WNLL trained with PGD adversarial training
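The models in Figure 6 are obtained with PGD adversarial training: each minibatch is replaced by PGD adversarial examples crafted against the current model before the parameter update. The sketch below shows this standard recipe in PyTorch style; the perturbation radius, step size, iteration count, optimizer, and data loader are illustrative assumptions rather than our exact training configuration.

```python
import torch

def pgd_attack(model, loss_fn, x, y, epsilon=8/255, alpha=2/255, num_iter=10):
    """PGD sketch: random start in the epsilon-ball, then iterative signed-gradient steps."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(num_iter):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
            delta = (x + delta).clamp(0.0, 1.0) - x   # keep perturbed pixels valid
    return (x + delta).detach()

def adversarial_training_epoch(model, loader, optimizer, loss_fn):
    """One epoch of PGD adversarial training: update on adversarial examples only."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, loss_fn, x, y)
        optimizer.zero_grad()
        loss_fn(model(x_adv), y).backward()
        optimizer.step()
```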
Table 1.  Running time and GPU memory for ResNet20 with two different activation functions

                 Training time   Testing time   Memory
ResNet20         3925.6 (s)      0.657 (s)      1007 (MB)
ResNet20-WNLL    7378.4 (s)      14.09 (s)      1563 (MB)
Table 2.  Mutual classification accuracy on the adversarial images crafted by using FGSM and IFGSM to attack ResNet56 and ResNet56-WNLL. (Unit: $ \% $)

Accuracy of ResNet56 on adversarial images crafted by attacking ResNet56-WNLL:

Attack   Training data           $ \epsilon=0 $   $ \epsilon=0.02 $   $ \epsilon=0.04 $   $ \epsilon=0.06 $   $ \epsilon=0.08 $   $ \epsilon=0.1 $
FGSM     Original data           93.0             69.8                56.9                44.6                34.6                28.3
FGSM     TVM data                88.3             51.5                37.9                30.1                24.7                20.9
FGSM     Original + TVM data     93.1             78.5                70.9                64.6                59.8                55.8
IFGSM    Original data           93.0             5.22                5.73                6.73                7.55                8.55
IFGSM    TVM data                88.3             7.00                6.82                8.30                9.28                10.7
IFGSM    Original + TVM data     93.1             27.3                28.6                29.5                29.1                29.4

Accuracy of ResNet56-WNLL on adversarial images crafted by attacking ResNet56:

Attack   Training data           $ \epsilon=0 $   $ \epsilon=0.02 $   $ \epsilon=0.04 $   $ \epsilon=0.06 $   $ \epsilon=0.08 $   $ \epsilon=0.1 $
FGSM     Original data           94.5             65.2                49.0                39.3                32.8                28.3
FGSM     TVM data                90.6             45.9                30.9                22.2                16.9                13.8
FGSM     Original + TVM data     94.7             78.3                68.2                61.1                56.5                52.5
IFGSM    Original data           94.5             3.37                3.71                3.54                4.69                6.41
IFGSM    TVM data                90.6             7.88                7.51                7.58                8.07                9.67
IFGSM    Original + TVM data     94.7             34.3                33.4                33.1                34.6                35.8
Table 3.  Mutual classification accuracy on the adversarial images crafted by using CW-L2 to attack ResNet56 and ResNet56-WNLL. (Unit: $ \% $)

Training data    Original data   TVM data    Original + TVM data
Exp-Ⅰ            52.1            43.2        80.0
Exp-Ⅱ            59.7            41.1        80.1
Table 4.  Testing accuracy on the adversarial/TVM adversarial CIFAR10 dataset. The testing accuracy with no defense is in red italic; and the results with all three defenses are in boldface. (Unit: $ \% $)

Training data    Original data   TVM data    Original + TVM data
ResNet56         4.94/32.2       11.8/54.0   15.1/52.4
ResNet56-WNLL    18.3/35.2       15.0/53.9   28/54.5
Table 5.  Testing accuracy on the adversarial/TVM adversarial CIFAR10 dataset. The testing accuracy with no defense is in red italic; and the results with all three defenses are in boldface. (Unit: $ \% $)

ResNet56

Attack   Training data          $ \epsilon=0 $   $ \epsilon=0.02 $   $ \epsilon=0.04 $   $ \epsilon=0.06 $   $ \epsilon=0.08 $   $ \epsilon=0.1 $
FGSM     Original data          93.0             36.9/19.4           29.6/18.9           26.1/18.4           23.1/17.9           20.5/17.1
FGSM     TVM data               88.3             27.4/50.4           19.1/47.2           16.6/43.7           15.0/38.9           13.7/35.0
FGSM     Original + TVM         93.1             48.6/51.1           42.0/47.6           39.1/44.2           37.1/41.8           35.6/39.1
IFGSM    Original data          93.0             0/16.6              0/16.1              0.02/15.9           0.1/15.5            0.25/16.1
IFGSM    TVM data               88.3             0.01/43.4           0/42.5              0.02/42.4           0.18/42.7           0.49/42.4
IFGSM    Original + TVM         93.1             0.1/38.4            0.09/37.9           0.36/37.9           0.84/37.6           1.04/37.9

ResNet56-WNLL

Attack   Training data          $ \epsilon=0 $   $ \epsilon=0.02 $   $ \epsilon=0.04 $   $ \epsilon=0.06 $   $ \epsilon=0.08 $   $ \epsilon=0.1 $
FGSM     Original data          94.5             58.5/26.0           50.1/25.4           42.3/25.5           35.7/24.9           29.2/22.9
FGSM     TVM data               90.6             31.5/52.6           24.5/49.6           20.2/45.3           17.3/41.6           14.4/37.5
FGSM     Original + TVM         94.7             60.5/55.4           56.7/52.0           55.3/48.6           53.2/45.9           50.1/43.7
IFGSM    Original data          94.5             0.49/16.7           0.14/17.3           0.3/16.9            1.01/16.6           0.94/16.5
IFGSM    TVM data               90.6             0.61/37.3           0.43/36.3           0.63/35.9           0.87/35.9           1.19/35.5
IFGSM    Original + TVM         94.7             0.19/38.5           0.3/39.4            0.63/40.1           1.26/38.9           1.72/39.1