View Synthesis with Sculpted Neural Points
Authors: Yiming Zuo, Jia Deng
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on common benchmarks including DTU (Jensen et al., 2014), LLFF (Mildenhall et al., 2019), NeRF-Synthetic (Mildenhall et al., 2020), and Tanks&Temples (Knapitsch et al., 2017), and our method shows better or comparable performance against all baselines. |
| Researcher Affiliation | Academia | Yiming Zuo & Jia Deng, Department of Computer Science, Princeton University, {zuoym,jiadeng}@princeton.edu |
| Pseudocode | Yes | Algorithm 1 Point Adding (an illustrative point-adding sketch appears after the table) |
| Open Source Code | Yes | Code is available at https://github.com/princeton-vl/SNP. |
| Open Datasets | Yes | We evaluate our method on DTU (Jensen et al., 2014), LLFF (Mildenhall et al., 2019; 2020), NeRF's Realistic Synthetic 360° (Mildenhall et al., 2020), and Tanks&Temples (Knapitsch et al., 2017). |
| Dataset Splits | No | For DTU, we reserve 1 in every 7 images for testing, resulting in 42 training views and 7 test views. For LLFF, we reserve 1/8 of the views for testing. For NeRF-Synthetic, the training set and test set for each scene contain 100 and 200 images respectively. No explicit validation split information is provided for any dataset. (A sketch of this train/test split convention appears after the table.) |
| Hardware Specification | Yes | We experiment on a single RTX 3090 GPU, optimizing for 50,000 steps on each scene with a batch size of 1. |
| Software Dependencies | No | We implement our method with PyTorch (Paszke et al., 2019) and PyTorch3D (Ravi et al., 2020). Specific version numbers for these software dependencies are not explicitly provided in the text, only references to their respective papers. (A version-recording snippet appears after the table.) |
| Experiment Setup | Yes | We experiment on a single RTX 3090 GPU, optimizing for 50,000 steps on each scene with a batch size of 1... For the point dropout layer, we use a dropout rate p_d = 0.5... We use an L1 loss... λ_TV is set to 0.01... The learning rates for the U-Net, the feature vectors f, the point position p, and the opacity o are set to 10^-4, 10^-2, 10^-4, 10^-4, respectively. The rasterization softness hyper-parameter γ is selected to be 10^-3... train the model with a batch size of 1 for 50,000 steps for all experiments. (A configuration sketch mapping these values onto PyTorch appears after the table.) |
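
The paper's Algorithm 1 (Point Adding) is only named in the table, not reproduced here. As a rough illustration of what an error-driven point-adding step can look like, the sketch below back-projects high-error pixels of a training view into candidate 3D points along their camera rays. It is a hypothetical stand-in, not SNP's actual algorithm; every function name, argument, and threshold is an assumption.

```python
import torch

def add_points_from_error(error_map, depth_map, K_inv, cam_to_world,
                          error_thresh=0.1, jitter=0.01, n_samples=4):
    """Hypothetical error-driven point adding (illustration only, not SNP's Algorithm 1).

    error_map:    (H, W) per-pixel rendering error for one training view
    depth_map:    (H, W) depth rendered from the current point cloud
    K_inv:        (3, 3) inverse camera intrinsics
    cam_to_world: (4, 4) camera-to-world transform
    Returns candidate points of shape (M * n_samples, 3) in world space.
    """
    ys, xs = torch.nonzero(error_map > error_thresh, as_tuple=True)  # high-error pixels
    if xs.numel() == 0:
        return torch.empty(0, 3)

    # Back-project each high-error pixel to camera space at its rendered depth.
    pix = torch.stack([xs.float() + 0.5, ys.float() + 0.5,
                       torch.ones_like(xs, dtype=torch.float)], dim=-1)
    cam_pts = (K_inv @ pix.T).T * depth_map[ys, xs, None]            # (M, 3)

    # Sample several candidates per pixel by jittering the depth slightly.
    offsets = (torch.rand(cam_pts.shape[0], n_samples, 1) - 0.5) * 2 * jitter
    cand = (cam_pts[:, None, :] * (1.0 + offsets)).reshape(-1, 3)    # (M * n_samples, 3)

    # Move candidates from camera to world coordinates.
    cand_h = torch.cat([cand, torch.ones(cand.shape[0], 1)], dim=-1)
    return (cam_to_world @ cand_h.T).T[:, :3]
```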
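
The held-out view counts quoted above (1 of every 7 DTU images, 1/8 of LLFF views) amount to a simple index split. The helper below is a minimal sketch of that convention; the exact view ordering and the offset of the held-out indices are assumptions, and the function name is ours.

```python
def train_test_split(num_views, test_every):
    """Hold out every `test_every`-th view for testing, keep the rest for training.

    DTU:  train_test_split(49, 7) -> 42 training views, 7 test views.
    LLFF: test_every=8 holds out 1/8 of the views.
    """
    test_ids = list(range(0, num_views, test_every))
    train_ids = [i for i in range(num_views) if i not in set(test_ids)]
    return train_ids, test_ids

train_ids, test_ids = train_test_split(49, 7)
assert len(train_ids) == 42 and len(test_ids) == 7
```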
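
Because PyTorch and PyTorch3D are cited without version numbers, a reproduction should record the versions it actually runs. The snippet below only prints the locally installed versions; it does not claim to recover the versions the authors used.

```python
import torch
import pytorch3d

# Record the environment used for a reproduction attempt;
# the original paper does not pin these versions.
print("torch:", torch.__version__)
print("pytorch3d:", pytorch3d.__version__)
print("CUDA available:", torch.cuda.is_available())
```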
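
The values in the Experiment Setup row map naturally onto PyTorch optimizer parameter groups. The sketch below is a hedged illustration under assumptions: the module and tensor names (unet, f, p, o), their shapes, and the choice of Adam are ours, and the negative exponents are reconstructed from context (a learning rate of 10^4 would not be sensible).

```python
import torch

# Hyperparameters quoted in the setup (exponent signs reconstructed from context):
LR_UNET, LR_FEAT, LR_POS, LR_OPACITY = 1e-4, 1e-2, 1e-4, 1e-4
LAMBDA_TV = 0.01        # weight of the total-variation regularizer
DROPOUT_RATE = 0.5      # point dropout rate p_d
RASTER_GAMMA = 1e-3     # rasterization softness hyper-parameter gamma
NUM_STEPS = 50_000
BATCH_SIZE = 1

# Hypothetical modules/tensors standing in for the real model components.
unet = torch.nn.Conv2d(8, 3, 3, padding=1)          # placeholder for the U-Net
f = torch.nn.Parameter(torch.randn(10_000, 8))      # per-point feature vectors
p = torch.nn.Parameter(torch.randn(10_000, 3))      # point positions
o = torch.nn.Parameter(torch.zeros(10_000, 1))      # point opacities

optimizer = torch.optim.Adam([
    {"params": unet.parameters(), "lr": LR_UNET},
    {"params": [f], "lr": LR_FEAT},
    {"params": [p], "lr": LR_POS},
    {"params": [o], "lr": LR_OPACITY},
])
```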