BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection
Authors: Saket S. Chaturvedi, Lan Zhang, Wenbin Zhang, Pan He, Xiaoyong Yuan
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 Evaluation: In this section, we first detail our experimental framework (dataset, implementation & training details, evaluation metrics) and then present the evaluation results of the proposed BadFusion. We further demonstrate the effectiveness of BadFusion against mainstream Camera-to-LiDAR fusion methods in 3D object detection and also benchmark our approach against three state-of-the-art backdoor detection methods. Lastly, we conduct an ablation study to elucidate the internal mechanics of BadFusion. |
| Researcher Affiliation | Academia | 1Clemson University 2Florida International University 3Auburn University |
| Pseudocode | Yes | Algorithm 1: Algorithm Procedure of BadFusion |
| Open Source Code | No | The paper states: 'We implement Untar OD based on their open-source code' with a footnote link to a third-party repository. There is no explicit statement or link indicating that the code for their proposed method (BadFusion) is open-source or publicly available. |
| Open Datasets | Yes | We use the KITTI dataset [Geiger et al., 2013] in the evaluation. |
| Dataset Splits | Yes | we split the training data into a train set and a validation set with 3,712 and 3,769 samples, respectively, following the train/valid split process in previous work [Chen et al., 2016]. |
| Hardware Specification | No | The paper does not explicitly specify the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions software components like 'Focal Loss', 'Smooth L1 Loss', 'AdamW optimizer', and 'mmdetection3d' but does not provide specific version numbers for these or other key software dependencies required for replication. |
| Experiment Setup | Yes | The fusion models are trained using an AdamW optimizer with a learning rate of 0.002 and a weight decay parameter of 0.01 for 70 epochs. |
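The dataset-split and experiment-setup details extracted above can be collected into a small configuration sketch. This is a minimal illustration, not the authors' code (which is not released): the constant and function names below are hypothetical, and only the numeric values (3,712/3,769 KITTI split, AdamW with lr 0.002 and weight decay 0.01, 70 epochs) come from the paper.

```python
# Hedged sketch of the reported reproducibility details.
# All identifiers here are illustrative; only the numbers are from the paper.

KITTI_TRAIN_SPLIT = 3712   # train samples (Chen et al., 2016 split)
KITTI_VAL_SPLIT = 3769     # validation samples
KITTI_TOTAL = 7481         # full KITTI 3D-detection training set

TRAIN_CONFIG = {
    "optimizer": "AdamW",   # stated optimizer
    "lr": 0.002,            # stated learning rate
    "weight_decay": 0.01,   # stated weight decay
    "epochs": 70,           # stated training length
}


def describe_setup(config: dict) -> str:
    """Render the reported training setup as a one-line summary."""
    return (f"{config['optimizer']}(lr={config['lr']}, "
            f"wd={config['weight_decay']}) for {config['epochs']} epochs")


# Sanity check: the two splits partition the full KITTI training set.
assert KITTI_TRAIN_SPLIT + KITTI_VAL_SPLIT == KITTI_TOTAL
print(describe_setup(TRAIN_CONFIG))
```

Keeping these values in one place makes it easy to spot what a replication would still need to fill in: hardware, library versions, and random seeds are not reported.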