Resistance to Fast Gradient Sign Method Using Block Switching Algorithm
DOI: https://doi.org/10.53819/81018102t7002

Keywords: Block-Switching Algorithm, Cryptographic Strength, Adversarial Attacks, Probability Theory, Encryption Security

Abstract
Traditional defenses against the Fast Gradient Sign Method (FGSM) attack usually involve altering the input data before processing, training systems to recognize harmful inputs, or detecting harmful inputs directly. However, these traditional methods have a number of shortcomings, including limited success, vulnerability to more advanced attacks, difficulty in understanding how they work, and excessive dependence on standard benchmark datasets for testing. By building a strong protective system against the Fast Gradient Sign Method, this study aims to enhance the resilience of machine learning algorithms against adversarial attacks while improving their safety and dependability at the highest level of accuracy and performance. The study is guided by three objectives: to investigate the robustness of existing deep learning algorithms in defending against the Fast Gradient Sign Method; to implement the block-switching algorithm for defending against the Fast Gradient Sign Method; and to evaluate the performance metrics of the block-switching algorithm for the protection of deep learning models against adversarial attacks.
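The block-switching defense rests on stochastic model assembly: several sub-model "channels" are trained independently, and at inference time one channel is selected at random to process the input, so gradients an attacker computes against one channel need not transfer to the channel actually used. The following is a minimal TensorFlow sketch of that idea, not the study's implementation; the channel architecture, the channel count, and the class count are illustrative assumptions.

```python
# Minimal sketch of block switching (illustrative; assumes TF2 eager mode):
# several independently trained "lower" blocks share one "upper" classifier,
# and each forward pass routes the input through a randomly chosen block.
import random
import tensorflow as tf

def make_lower_block():
    # Hypothetical channel architecture; the study's ResNet/VGG/Inception
    # sub-networks would differ.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])

class BlockSwitchingModel(tf.keras.Model):
    def __init__(self, n_channels=4, n_classes=10):
        super().__init__()
        self.lower_blocks = [make_lower_block() for _ in range(n_channels)]
        self.upper = tf.keras.layers.Dense(n_classes, activation="softmax")

    def call(self, inputs, training=False):
        # Random channel selection supplies the stochasticity; an attacker's
        # gradient is valid only for the channel that happened to be drawn.
        block = random.choice(self.lower_blocks)
        return self.upper(block(inputs))
```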
The study will consider three theories that underpin the block-switching algorithm: the avalanche effect, cryptographic strength, and probability theory. The research will use datasets from the Modified National Institute of Standards and Technology (MNIST) and the Canadian Institute for Advanced Research (CIFAR). It will select commonly used deep learning models for image classification, such as Residual Neural Network (ResNet), Visual Geometry Group (VGG), or Inception, for analysis. The study will employ the Fast Gradient Sign Method to create adversarial examples for each model within the chosen datasets.
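As a concrete illustration, below is a minimal FGSM generator in TensorFlow in the spirit of Goodfellow et al. (2015): each input is shifted by epsilon times the sign of the loss gradient with respect to that input. The model handle, label format, and epsilon default are assumptions, not the study's settings.

```python
# Minimal FGSM sketch: perturb inputs along the sign of the loss gradient.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, images, labels, epsilon=0.1):
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)                      # differentiate w.r.t. inputs
        loss = loss_fn(labels, model(images))
    grad = tape.gradient(loss, images)
    adversarial = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # stay in valid pixel range
```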
The researcher will then compare each deep learning model's performance on the adversarial dataset with its performance on the original dataset to see how resilient each one is against Fast Gradient Sign Method adversarial attacks. To evaluate robustness, criteria including accuracy, precision, recall, and F1 score will be applied.
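A sketch of that comparison with scikit-learn follows; y_true, y_pred_clean, and y_pred_adv are assumed label arrays from the clean and adversarial evaluations, and macro averaging is one reasonable choice for multi-class data.

```python
# Score a model's predictions with the four stated metrics.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

def report(y_true, y_pred):
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
    }

# Robustness reads off the drop between the two reports:
# report(y_true, y_pred_clean) versus report(y_true, y_pred_adv)
```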
The research will perform a sensitivity analysis on the parameters used in Fast Gradient Sign Method attack generation to investigate how the attack strength and the number of iterations affect the model's robustness against adversarial attacks. To perform the sensitivity analysis, the researcher will use Python and a set of test data with the TensorFlow library.
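One way such a sweep could look, reusing the fgsm() sketch above: vary epsilon and, for an iterative variant, the step count, and record adversarial accuracy at each setting. The grid values and the names model, x_test, and y_test are placeholders; the iterative loop here simply splits epsilon into equal steps rather than projecting onto an epsilon-ball, which is a simplification.

```python
# Sensitivity sweep over attack strength and iteration count (illustrative).
import numpy as np

def adversarial_accuracy(model, x, y, epsilon, iterations=1):
    x_adv = x
    step = epsilon / iterations
    for _ in range(iterations):          # iterations=1 is plain FGSM
        x_adv = fgsm(model, x_adv, y, epsilon=step)
    preds = np.argmax(model.predict(x_adv), axis=1)
    return float(np.mean(preds == y))

for eps in [0.01, 0.05, 0.1, 0.2, 0.3]:
    for iters in [1, 5, 10]:
        acc = adversarial_accuracy(model, x_test, y_test, eps, iters)
        print(f"epsilon={eps:.2f}  iterations={iters:2d}  accuracy={acc:.3f}")
```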
References
Afzal, S., Yousaf, M., Afzal, H., Alharbe, N., & Mufti, M. R. (2020). Cryptographic Strength Evaluation of Key Schedule Algorithms. Security and Communication Networks. https://doi.org/10.1155/2020/3189601
Alekseev, E., & Bozhko, A. (2020). Algorithms for switching between block-wise and arithmetic masking. https://eprint.iacr.org/2022/1624.pdf
Ali, K., Qureshi, A. N., Bhatti, M. S., Sohail, A., & Hijji, M. (2022). Defending Adversarial Examples by a Clipped Residual U-Net Model. Intelligent Automation & Soft Computing. https://doi.org/10.32604/iasc.2023.028810
Andrade, C. (2020). Understanding the Difference Between Standard Deviation and Standard Error of the Mean, and Knowing When to Use Which. Indian Journal of Psychological Medicine, 42(4), 409–410. https://doi.org/10.1177/0253717620933419
Anthi, E., Williams, L., Rhode, M., Burnap, P., & Wedgbury, A. (2021). Adversarial attacks on machine learning cybersecurity defences in Industrial Control Systems. Journal of Information Security and Applications. https://doi.org/10.1016/j.jisa.2020.
Athalye, A., Engstrom, L., Ilyas, A., & Kwok, K. (2018). Synthesizing robust adversarial examples. In ICML.
Bai, J., Gao, K., Gong, D., Xia, S., Li, Z., & Liu, W. (2022). Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips. https://arxiv.org/abs/2207.13417
Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798-1828. https://doi.org/10.1109/TPAMI.2013.50
Bhoge, J. P., & Chatur, P. N. (2014). Avalanche Effect of AES Algorithm. International Journal of Computer Science and Information Technologies, 5(3), 3101-3103. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.659.9331&rep=rep1&type=pdf
Catak, F. O., & Yayilgan, S. Y. (2021). Deep Neural Network Based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks. Communications in Computer and Information Science, 1382, 280-291. https://doi.org/10.1007/978-3-030-71711-7_23
Chan, P. P. K., He, Z. M., Li, H., & Hsu, C. C. (2018). Data sanitization against adversarial label contamination based on data complexity. International Journal of Machine Learning and Cybernetics, 9(6), 1039–1052. https://doi.org/10.1007/s13042-016-0629
Chen, J., Jordan, M.I., & Wainwright, M. J. (2020). HopSkipJumpAttack: a query-efficient decision-based attack. In 2020 IEEE symposium on security and privacy (pp. 1277-1294). IEEE
Deng, J., Berg, A. C., Fei-Fei, L., & Li, K. (2010). What Does Classifying More Than 10,000 Image Categories Tell Us? In K. Daniilidis, P. Maragos, & N. Paragios (Eds.), Computer Vision – ECCV 2010 (Vol. 6315, pp. 71–84). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-15555-0_6
Dolmatov, V., & Baryshkov, D. (2020). Block Cipher “Magma” (RFC 8891). https://doi.org/10.17487/RFC8891
Dong, Y., Su, H., Wu, B., Li, Z., Liu, W., Zhang, T., & Zhu, J. (2019). Efficient decision-based black-box adversarial attacks on face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7714-7722).
Dunn, C., Moustafa, N., & Turnbull, B. (2020, August 10). Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things. Sustainability, 12(16), 17. https://doi.org/10.3390/su12166434
Echeverri, C. (2017). Visualization of the Avalanche Effect in CT2. Doctoral dissertation, University of Mannheim. https://www.cryptool.org/assets/ctp/documents/BA_Echeverri.pdf
El Omda, S., & Sergent, S. R. (2023). Standard Deviation. In StatPearls. StatPearls Publishing. http://www.ncbi.nlm.nih.gov/books/NBK574574/
Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA.
Grimmett, G. R., & Stirzaker, D. R. (1992). Probability and Random Processes, second edition. Oxford University Press.
Hassan, M. R. (2021, June 15). A Robust Deep-Learning-Enabled Trust-Boundary Protection for Adversarial Industrial IoT Environment. IEEE Internet of Things Journal, 8(12). https://doi.org/10.1109/JIOT.2020.3019225
He, Z., Li, J., & Li, (2019). An Improved Block Switching Method for Image Compression. In Proceedings of the 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, Chengdu, China.
Hsu, C.-C., Zhuang, Y.-X., & Lee, C.-Y. (2020). Deep fake image detection based on pairwise learning. Applied Sciences, 10(1), 370.
Hu, S., & Cao, Y. (2018). A New Block-Switching Method for Video Compression. In Proceedings of the 2018 IEEE International Conference on Signal Processing, Communications, and Computing, Guangzhou, China, 1-5.
Hutter, M., & Tunstall, M. (2019). Constant-time higher-order Boolean-to-arithmetic masking. Journal of Cryptographic Engineering, 9, 173–184. https://doi.org/10.1007/s13389-018-0191-z
Janiesch, C., Zschech, P., & Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets, 31(3), 685–695. https://doi.org/10.1007/s12525-021-00475-2
Killmann, W., & Schindler, W. (2001). AIS 31: A proposal for functionality classes and evaluation methodology for true (physical) random number generators, Version 3.1. Bundesamt für Sicherheit in der Informationstechnik (BSI).
Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial examples in the physical world. arXiv. arXiv:1607.02533.
Kuzlu, M., Fair, C., & Guler, O. (2021). Role of Artificial Intelligence in the Internet of Things (IoT) Cybersecurity. Discover Internet of Things, 1(1). https://doi.org/10.1007/s43926-020-00001-4
Kwon, H., Kim, Y., Yoon, H., & Choi, D. (2021). Classification score approach for detecting adversarial examples in deep neural networks. Multimedia Tools and Applications, 80(7), 10339–10360.
Liao, Q., Zhong, Z., Zhang, Y., Xie, C., & Pu, S. (2018). Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the European Conference on Computer Vision (ECCV), 489-504.
Liu, Y., Shi, X., & Chen, J. (2016). An Improved Block-Switching Method for H.264/AVC. Proceedings of the 2016 IEEE International Conference on Information and Automation, Ningbo, China, 791-796.
Liang, M., Chang, Z., Wan, Z., Gan, Y., Schlangen, E., & Šavija, B. (2022, January). Interpretable Ensemble-Machine-Learning models for predicting creep behavior of concrete. Cement and Concrete Composites, 125, 104295. https://doi.org/10.1016/j.cemconcomp.2021.104295
Luo, Z. Z. (2020, July 13). Adversarial machine learning based partial-model attack in IoT. Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning, 13-18. https://doi.org/10.1145/3395352.3402619
Mao, C., Gupta, A., Nitin, V., Ray, B., Song, S., Yang, J., & Vondrick, C. (2020). Multitask learning strengthens adversarial robustness. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Lecture Notes in Computer Science, vol. 12347 (pp. 158–174). Springer, Cham.
Manikandan, S. (2011). Measures of central tendency: Median and mode. Journal of Pharmacology & Pharmacotherapeutics, 2(3), 214–215. https://doi.org/10.4103/0976-500X.83300
Martinez, E. E. B., Oh, B., Li, F., & Luo, X. (2019). Evading Deep Neural Network and Random Forest Classifiers by Generating Adversarial Samples. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11358, 143-155. https://doi.org/10.1007/978-3-030-18419-3_10
Merenda, M., Porcaro, C., & Iero, D. (2020, April 29). Edge Machine Learning for AI-Enabled IoT Devices: A Review. Sensors (Basel), 20(9), 34. https://doi.org/10.3390/s20092533. PMID: 32365645; PMCID: PMC7273223.
Zhu, M., Chen, T., & Wang, Z. (2021). Sparse and imperceptible adversarial attack via a homotopy algorithm. arXiv preprint arXiv:2106.06027.
Moosavi-Dezfooli, S.M., Fawzi, A., & Frossard, P. (2016). Deepfool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2574–2582.
Thompson, N. C., Greenewald, K., Lee, K., & Manso, G. F. (2020). The computational limits of deep learning. MIT Initiative on the Digital Economy Research Brief. https://doi.org/10.48550/arXiv.2007.05558
Oprea, A. (2021). Machine Learning Integrity and Privacy in Adversarial Environments. 1–2. https://doi.org/10.1145/3450569.3462164
Paje, R. E. J., Sison, A. M., & Medina, R. P. (2019). Multidimensional key RC6 algorithm, in Proceedings of the 3rd International Conference on Cryptography, Security and Privacy—ICCSP’19, pp. 33–38, Kuala Lumpur, Malaysia.
Preneel, B. (2000). “NESSIE project,” in Encyclopedia of Cryptography and Security. Springer, Berlin, Germany.
Qiu, S., Liu, Q., Zhou, S., & Wu, C. (2019). Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences, 9(5), 909.
Ramanujam, S., & Karuppiah, M. (2011). Designing an algorithm with a high Avalanche Effect. IJCSNS International Journal of Computer Science and Network Security, 11(1), 106-111. http://paper.ijcsns.org/07_book/201101/20110116.pdf.
Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2017). Adversarial generative nets: Neural network attacks on state-of-the-art face recognition. arXiv preprint arXiv:1801.00349.
Shi, H., Deng, Y., & Guan, Y. (2011). Analysis of the avalanche effect of the AES S box. In 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC) (pp. 5425-5428). IEEE. https://doi.org/10.1109/AIMSEC.2011.6009935
Simion, E. (2015). The relevance of statistical tests in cryptography. IEEE Security & Privacy, 13 (1), 66–70.
Sulak, F., Doğanaksoy, A., Ege, B., et al. (2010). Evaluation of randomness test results for short sequences. In International Conference on Sequences and Their Applications (pp. 309-319). Springer, Berlin, Heidelberg.
Taheri, S., Khormali, A., Salem, M., & Yuan, J. (2020). Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks. Big Data and Cognitive Computing, 4(2), 11. https://doi.org/10.3390/bdcc4020011
Taori, R., Kamsetty, A., Chu, B., & Vemuri, N. (2018). Targeted adversarial examples for black box audio systems. arXiv preprint arXiv:1805.07820.
Thacker, J. (2020). The Age of AI: Artificial Intelligence and the Future of Humanity. Zondervan.
Ukrop, M. (2016). Randomness analysis in authenticated encryption systems (Ph.D. thesis). Masarykova univerzita, Fakulta informatiky, Brno, Czechia.
Vinayakumar, R., Alazab, M., Srinivasan, S., Pham, Q. V., Padannayil, S. K., & Simran, K. (2020). A Visualized Botnet Detection System based Deep Learning for the Internet of Things Networks of Smart Cities. IEEE Transactions on Industry Applications.
Wang, X., Wang, S., Chen, P.-Y., Wang, Y., Kulis, B., Lin, X., & Chin, P. (2019). Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19).
Xie, Y., Li, Z., Shi, C., Liu, J., Chen, Y., & Yuan, B. (2021). Real-time, Robust and Adaptive Universal Adversarial Attacks Against Speaker Recognition Systems. Journal of Signal Processing Systems, 93(10), 1187–1200. https://doi.org/10.1007/s11265-020-01629-9.