PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation
Authors: Yuhan Ding, Fukun Yin, Jiayuan Fan, Hui Li, Xin Chen, Wen Liu, Chongshan Lu, Gang Yu, Tao Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have demonstrated the effectiveness of our method for large-scale scene novel view synthesis, which outperforms relevant state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | 1 School of Information Science and Technology, Fudan University, China 2 Academy for Engineering and Technology, Fudan University, China 3 Tencent PCG, Shanghai, China |
| Pseudocode | No | The paper describes methods and processes in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | Our code and models will be available. |
| Open Datasets | Yes | We use two outdoor large-scale scene datasets, OMMO [15] and Blended MVS [35], to evaluate our model. |
| Dataset Splits | No | The paper mentions 'training views' and 'training data' but does not specify explicit percentages or sample counts for training, validation, and test dataset splits. |
| Hardware Specification | Yes | "train on 4 A100 GPUs for around one day" and "20 hours on a single A100 GPU" |
| Software Dependencies | No | The paper mentions software like COLMAP and Adam optimizer but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | For point super-resolution diffusion, we set T = 1000, β₀ = 10⁻⁴, β_T = 0.01 and linearly interpolate the other βs for all experiments. We use the Adam optimizer with learning rate 2×10⁻⁴ and train on 4 A100 GPUs for around one day. We train this stage using the Adam optimizer with an initial learning rate of 5×10⁻⁴ for 2×10⁶ iterations, taking about 20 hours on a single A100 GPU. |
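The β schedule quoted in the experiment setup matches the standard linear DDPM noise schedule. As a minimal sketch (not the authors' code), the NumPy snippet below shows how such a schedule and the usual DDPM forward-noising step could be constructed from the stated hyperparameters (T = 1000, β₀ = 10⁻⁴, β_T = 0.01); the helper name `q_sample` is hypothetical.

```python
import numpy as np

# Linear beta schedule from the paper's setup: T = 1000 timesteps,
# betas interpolated linearly between beta_0 = 1e-4 and beta_T = 0.01.
T = 1000
betas = np.linspace(1e-4, 0.01, T)

# Standard DDPM quantities derived from the schedule.
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0): the standard DDPM forward-noising step.

    Hypothetical helper for illustration; `x0` is a clean sample
    (e.g. an (N, 3) point set) and `t` is a timestep in [0, T).
    """
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# Example usage: noise a dummy point cloud at the midpoint of the schedule.
x0 = np.zeros((1024, 3))
xt = q_sample(x0, t=500)
```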