RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models
Authors: Xingchen Zhou, Ying He, F. Richard Yu, Jianqiang Li, You Li
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts, including editing appearance, shape, and more. We validate our method on both real-world datasets and synthetic-world datasets for these editing tasks. |
| Researcher Affiliation | Academia | (1) College of Computer Science and Software Engineering, Shenzhen University; (2) Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ); (3) National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'Please visit https://repaintnerf.github.io for a better view of our results.' This URL is explicitly for viewing results, not for providing access to the source code of the methodology. There is no explicit statement about code release. |
| Open Datasets | Yes | We use Local Light Field Fusion (LLFF) [Mildenhall et al., 2019] and Blender for testing. ... The LLFF [Mildenhall et al., 2019] is collected from the real world... The Blender comes from the synthetic world... |
| Dataset Splits | No | The paper does not explicitly mention a validation dataset or a validation split percentage for its experiments. |
| Hardware Specification | Yes | We test our method on a single NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions software components such as Adam, Stable Diffusion, CLIP, Instant-NGP, LSeg, and MiDaS, but it does not specify version numbers for any of these software dependencies. |
| Experiment Setup | Yes | We use Adam [Kingma and Ba, 2014] to optimize our NeRF model in the first stage, with a learning rate 1e-2 and batch size 4096. ... We train these two phases using Adan with a learning rate of 1e-3 decaying to 1e-4 and a batch size of 1. These two phases are optimized for 3000 steps and 10000 steps respectively. (See the configuration sketch below the table.) |
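
For readers attempting replication, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration. The sketch below is an assumption-laden reconstruction, not the authors' code: the stage names, the dictionary layout, and the exponential shape of the learning-rate decay are all hypothetical; only the numeric values (optimizers, learning rates, batch sizes, step counts) come from the paper.

```python
# Hypothetical reconstruction of the reported RePaint-NeRF training setup.
# All names are illustrative; only the numeric values are from the paper.

# Stage 1: base NeRF optimization with Adam (lr 1e-2, batch size 4096).
STAGE1 = {"optimizer": "Adam", "lr": 1e-2, "batch_size": 4096}

# Stage 2: two editing phases trained with Adan, learning rate decaying
# from 1e-3 to 1e-4, batch size 1, for 3000 and 10000 steps respectively.
STAGE2 = {"optimizer": "Adan", "lr_start": 1e-3, "lr_end": 1e-4,
          "batch_size": 1, "steps": (3000, 10000)}

def lr_at(step: int, total_steps: int,
          lr_start: float = 1e-3, lr_end: float = 1e-4) -> float:
    """Learning rate at a given step under exponential decay.

    The paper states only that the rate decays from 1e-3 to 1e-4;
    the exponential shape of this schedule is an assumption.
    """
    t = min(step / max(total_steps, 1), 1.0)
    return lr_start * (lr_end / lr_start) ** t
```

Under this assumed schedule, `lr_at(0, 10000)` returns 1e-3 and `lr_at(10000, 10000)` returns 1e-4, matching the quoted endpoints; a linear decay would satisfy the same endpoints, so the choice between the two cannot be settled from the paper alone.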