Abstract
3D object detection based on deep neural networks (DNNs) has been widely adopted in embedded applications such as autonomous driving. Nonetheless, recent studies have demonstrated that LiDAR data can suffer severe corruptions that cause 3D object detection to fail. Given the vulnerability of existing DNNs and the wide application of 3D object detection in safety-critical scenarios, this work investigates the robustness of deep 3D detection models under adversarial attacks. The proposed universal adversarial attack is encoded into a perturbation voxel that adds point-wise perturbations to benign LiDAR scenes. This detector-level perturbation voxel, which covers the entire perceptual range of the detector, is generated by suppressing the detector's predictions on training samples. The designed perturbation voxel can be applied to the entire scene to simulate the global perturbations inherent in LiDAR data, and it adapts to detectors with various point cloud representations, making the attack universal. To evaluate its effectiveness, the proposed attack was launched against several deep 3D detectors on multiple datasets, and the results demonstrate its superiority over existing methods.
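The following is a minimal sketch of the general idea described above: a single perturbation voxel grid, shared across all training scenes, is optimized to suppress a detector's predicted confidences while remaining bounded. The grid shape, voxel size, `detector` interface, and epsilon bound are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def voxel_indices(points, voxel_size=0.5, pc_range=(-50.0, -50.0, -5.0)):
    """Map each point (x, y, z) to an index in the perturbation voxel grid."""
    idx = ((points[:, :3] - torch.tensor(pc_range)) / voxel_size).long()
    return idx.clamp(min=0)

def train_universal_perturbation(detector, scenes, grid_shape=(200, 200, 20),
                                 epsilon=0.2, lr=0.01, epochs=10):
    # One learnable offset vector per voxel cell, shared across all scenes.
    delta = torch.zeros(*grid_shape, 3, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for points in scenes:                       # (N, 3+) LiDAR points
            idx = voxel_indices(points)
            idx = torch.minimum(idx, torch.tensor(grid_shape) - 1)
            perturbed = points.clone()
            perturbed[:, :3] = points[:, :3] + delta[idx[:, 0], idx[:, 1], idx[:, 2]]
            scores = detector(perturbed)            # predicted box confidences
            loss = scores.sum()                     # suppress all predictions
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                   # keep the perturbation bounded
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```

Because the voxel grid is indexed by spatial location rather than by object, the same learned perturbation can be applied to any scene within the detector's perceptual range, which is what makes the attack scene- and detector-agnostic in spirit.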