NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations

Authors: Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan Celine Lin

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We consider three state-of-the-art (SOTA) GNeRF methods: IBRNet (Wang et al., 2021), MVSNeRF (Chen et al., 2021), and GNT (Wang et al., 2022b), where we adopt their official implementations and load their pretrained models for evaluation. Datasets: We follow the train/test dataset splits adopted by these three GNeRF variants and use both synthetic objects and real scenes from three datasets: three Lambertian objects from DeepVoxels (Sitzmann et al., 2019), eight Realistic Synthetic objects from NeRF (Mildenhall et al., 2020), and eight complex real-world forward-facing scenes from LLFF (Mildenhall et al., 2019). Regarding the source view selection, we follow each GNeRF variant's default scheme, e.g., select the nearby N views around the target view for IBRNet/GNT. NeRFool setup: The learning rate η in Eq. (5) is set to 1e-3 and δi is initialized with a uniform distribution U(−ϵ, ϵ) and then optimized for 500 iterations. (A sketch of this optimization loop is given after the table.)
Researcher Affiliation | Collaboration | 1 School of Computer Science, Georgia Institute of Technology, USA; 2 Intel Labs, San Diego, USA; 3 Rice University, USA.
Pseudocode | No | The paper describes procedural steps for its method, such as 'iteratively update δi with gradient ascent using an Adam optimizer', but it does not contain structured pseudocode or algorithm blocks with formal labels like 'Algorithm' or 'Pseudocode'.
Open Source Code | Yes | Our codes are available at: https://github.com/GATECH-EIC/NeRFool.
Open Datasets | Yes | Datasets: We follow the train/test dataset splits adopted by these three GNeRF variants and use both synthetic objects and real scenes from three datasets: three Lambertian objects from DeepVoxels (Sitzmann et al., 2019), eight Realistic Synthetic objects from NeRF (Mildenhall et al., 2020), and eight complex real-world forward-facing scenes from LLFF (Mildenhall et al., 2019).
Dataset Splits | No | The paper states 'We follow the train/test dataset splits adopted by these three GNeRF variants' but does not specify validation splits, exact percentages, or sample counts, nor does it explicitly reference predefined splits that include validation information. It is unclear whether a separate validation set was used or how it was split.
Hardware Specification | No | The paper does not specify the hardware used for its experiments, such as GPU models (e.g., NVIDIA A100, RTX 2080 Ti), CPU models, or memory; it only mentions the 'official implementation' and 'pretrained models'.
Software Dependencies | No | The paper mentions using an 'Adam optimizer' and implicitly relies on standard machine learning frameworks (likely PyTorch, given the context of NeRF research), but it does not provide version numbers for any software dependencies, libraries, or frameworks (e.g., 'PyTorch 1.9', 'CUDA 11.1').
Experiment Setup | Yes | NeRFool setup: The learning rate η in Eq. (5) is set to 1e-3 and δi is initialized with a uniform distribution U(−ϵ, ϵ) and then optimized for 500 iterations. We apply the aforementioned adversarial training to GNT's pretraining stage (Wang et al., 2022b) using ϵ=8 and an iteration of 1 for updating δi. (See the adversarial-training sketch after the table.)
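
To make the optimization quoted in the Research Type and Experiment Setup rows concrete, here is a minimal PyTorch-style sketch of the per-view perturbation attack: each δi is initialized from U(−ϵ, ϵ), updated by gradient ascent with Adam (η = 1e-3) for 500 iterations, and projected back into the ϵ-ball. The `render` interface, the MSE rendering objective, the [0, 1] pixel range, and ϵ = 8/255 are illustrative assumptions, not details confirmed by the paper.

```python
import torch

def nerfool_attack(render, source_views, target_pose, target_rgb,
                   eps=8/255, lr=1e-3, n_iters=500):
    """Sketch of the per-view attack; `render` is a hypothetical GNeRF
    interface mapping (source views, target pose) -> predicted RGB."""
    # One perturbation per source view, initialized ~ U(-eps, eps)
    delta = torch.empty_like(source_views).uniform_(-eps, eps)
    delta.requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)  # eta = 1e-3 in Eq. (5)

    for _ in range(n_iters):  # 500 iterations in the paper's setup
        pred = render((source_views + delta).clamp(0, 1), target_pose)
        loss = torch.nn.functional.mse_loss(pred, target_rgb)
        opt.zero_grad()
        (-loss).backward()  # minimizing -loss == gradient ascent on the loss
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back into the L-inf eps-ball
    return delta.detach()
```

Minimizing the negated rendering loss with Adam matches the paper's description of 'iteratively update δi with gradient ascent using an Adam optimizer'; the L-inf projection keeps the perturbation within the ϵ budget.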
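The adversarial training applied to GNT's pretraining stage can be sketched the same way: a single inner update of δi per training step (reading ϵ=8 as 8/255 on a [0, 1] pixel scale, which is an interpretation rather than a stated detail), followed by a model update on the perturbed source views. `render_loss` and the outer optimizer are hypothetical placeholders.

```python
import torch

def adv_pretrain_step(model, model_opt, render_loss,
                      source_views, target_pose, target_rgb,
                      eps=8/255, attack_lr=1e-3):
    """One adversarial pretraining step: 1-iteration inner attack, then a
    weight update; `render_loss(model, views, pose, rgb)` is assumed."""
    # Inner maximization: a single Adam step on delta ('an iteration of 1')
    delta = torch.empty_like(source_views).uniform_(-eps, eps)
    delta.requires_grad_(True)
    inner_opt = torch.optim.Adam([delta], lr=attack_lr)
    loss = render_loss(model, source_views + delta, target_pose, target_rgb)
    (-loss).backward()  # ascend the rendering loss w.r.t. delta
    inner_opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # stay within the eps-ball
    delta = delta.detach()

    # Outer minimization: train the model on the perturbed source views
    model_opt.zero_grad()  # also clears model grads from the inner backward
    adv_loss = render_loss(model, source_views + delta, target_pose, target_rgb)
    adv_loss.backward()
    model_opt.step()
    return adv_loss.item()
```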