Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors
DOI:
https://doi.org/10.34190/iccws.19.1.2044
Keywords:
deep learning, cybersecurity, deep neural network, camera-based physical attack, object detector, effectiveness, robustness
Abstract
Physical adversarial attacks present a novel and growing challenge in cybersecurity, especially for systems that rely on physical inputs to Deep Neural Networks (DNNs), such as those found in Internet of Things (IoT) devices. These systems are vulnerable to physical adversarial attacks, in which real-world objects or environments are manipulated to mislead DNNs, threatening the operational integrity and security of IoT devices. Camera-based attacks are among the most practical adversarial attacks: they are easy to implement, more robust than other attack methods, and pose a serious threat to IoT security. This paper proposes the Adversarial Camera Patch (ADCP), a novel approach that employs a single camera patch to launch robust physical adversarial attacks against object detectors. ADCP optimizes the physical parameters of the camera patch using Particle Swarm Optimization (PSO) to identify the most adversarial configuration. The optimized patch is then attached to the camera lens to physically generate stealthy and robust adversarial samples. The effectiveness of the proposed approach is validated through ablation experiments in a digital environment, with results demonstrating effectiveness even under worst-case configurations (minimal width, maximum transparency). Notably, ADCP exhibits higher robustness than the baseline in both digital and physical domains. Given the simplicity, robustness, and stealthiness of ADCP, we advocate attention to the ADCP framework as a means of achieving streamlined, robust, and stealthy physical attacks. Such adversarial attacks pose new challenges and requirements for cybersecurity.
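The abstract describes optimizing the patch's physical parameters (e.g. width and transparency) with Particle Swarm Optimization. A minimal sketch of that optimization loop is shown below; the paper's actual objective queries an object detector on patched images, so the quadratic `detection_confidence` surrogate, its optimum, and the parameter bounds here are purely illustrative assumptions, not the authors' implementation.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal box-constrained Particle Swarm Optimization (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Random initial positions inside the bounds, zero initial velocities.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull toward pbest + social pull toward gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Step, then clamp back into the feasible box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical surrogate for the detector's confidence on the true class;
# ADCP would instead render the patch at (width, transparency) and score
# the detector's output on the patched image.
def detection_confidence(params):
    width, transparency = params
    return (width - 0.3) ** 2 + (transparency - 0.6) ** 2

best, val = pso_minimize(detection_confidence,
                         bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the attack setting, the objective would be minimized detection confidence (or maximized misclassification), with the patch re-rendered at each candidate parameter vector.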
Published
2024-03-21
Section
Academic Papers
License
Copyright (c) 2024 Kalibinuer Tiliwalidi, Bei Hui, Chengyin Hu, Jingjing Ge
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.