PruNeRF: Segment-Centric Dataset Pruning via 3D Spatial Consistency

Authors: Yeonsung Jung, Heecheol Yun, Joonhyung Park, Jin-Hwa Kim, Eunho Yang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on benchmark datasets demonstrate that PruNeRF consistently outperforms state-of-the-art methods in robustness against distractors. In this section, we evaluate our method across various benchmark datasets to assess its robustness against the presence of distractors. We describe our experimental setup in Section 4.1. Then, we present the evaluation results in Section 4.2 and the ablation study in Section 4.3.
Researcher Affiliation | Collaboration | 1 Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea; 2 NAVER AI Lab, Republic of Korea; 3 AI Institute of Seoul National University, Republic of Korea; 4 AITRICS, Republic of Korea.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that its source code is available or provide a link to it.
Open Datasets | Yes | For the synthetic datasets, we utilize Kubric datasets (Wu et al., 2022)... For the natural datasets, we use Pick (Wu et al., 2022) and Statue, Android, Baby Yoda (Sabour et al., 2023).
Dataset Splits | Yes | For validation, the camera rotates around the keyframe center, generating 100 views that only display the static background. (See the camera-pose sketch after the table.)
Hardware Specification | No | The paper does not explicitly provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., specific library versions or solver versions).
Experiment Setup | Yes | The model is trained for 250k iterations with a batch size of 16,384. We employ the Adam optimizer (Kingma & Ba, 2014) with hyperparameters of β1 = 0.9, β2 = 0.999, and ϵ = 10⁻⁶. The initial learning rate is set to 2 × 10⁻³ and exponentially decayed to 2 × 10⁻⁶. The first 512 iterations are used for warm-up.
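
To make the Experiment Setup row concrete, below is a minimal PyTorch sketch of an optimizer and learning-rate schedule consistent with the reported hyperparameters. The placeholder network, the log-linear decay formula, and the linear warm-up shape are assumptions; the paper only reports the values (250k iterations, batch size 16,384, Adam with β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁶, learning rate 2 × 10⁻³ decayed to 2 × 10⁻⁶, 512 warm-up iterations).

```python
import math
import torch

MAX_ITERS = 250_000
WARMUP_ITERS = 512
BATCH_SIZE = 16_384              # rays per training iteration (reported value)
LR_INIT, LR_FINAL = 2e-3, 2e-6

model = torch.nn.Linear(63, 4)   # hypothetical stand-in for the actual NeRF network
optimizer = torch.optim.Adam(model.parameters(), lr=LR_INIT,
                             betas=(0.9, 0.999), eps=1e-6)

def lr_at(step: int) -> float:
    """Exponential (log-linear) decay from LR_INIT to LR_FINAL with linear warm-up."""
    warmup = min(step / WARMUP_ITERS, 1.0)
    t = min(step / MAX_ITERS, 1.0)
    decayed = math.exp((1.0 - t) * math.log(LR_INIT) + t * math.log(LR_FINAL))
    return warmup * decayed

# Inspect the schedule at a few milestones; inside a training loop this value
# would be written into optimizer.param_groups[i]["lr"] each iteration.
for step in (0, WARMUP_ITERS, MAX_ITERS // 2, MAX_ITERS):
    print(step, lr_at(step))     # 0.0, ~2.0e-3, ~6.3e-5, 2.0e-6
```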
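
The Dataset Splits row quotes a validation protocol in which the camera orbits the keyframe center to produce 100 background-only views. Below is a minimal sketch of how such a circular validation trajectory could be generated; the look-at convention, orbit radius, and camera height are illustrative assumptions, not values from the paper.

```python
import numpy as np

def look_at(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world pose whose viewing direction points from cam_pos to target."""
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, :3] = np.stack([right, true_up, -forward], axis=1)  # OpenGL-style: camera looks down -z
    c2w[:3, 3] = cam_pos
    return c2w

def circular_validation_poses(center, radius=4.0, height=1.5, n_views=100):
    """n_views poses evenly spaced on a circle around `center`, all looking inward."""
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        offset = np.array([radius * np.cos(theta), radius * np.sin(theta), height])
        poses.append(look_at(center + offset, center))
    return np.stack(poses)

val_poses = circular_validation_poses(center=np.zeros(3))
print(val_poses.shape)  # (100, 4, 4)
```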