Improving the Convergence of Dynamic NeRFs via Optimal Transport
Authors: Sameera Ramasinghe, Violetta Shevchenko, Gil Avraham, Hisham Husain, Anton van den Hengel
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically assess the effectiveness of the proposed regularization method by integrating it into several recent dynamic NeRF models. Empirically, we demonstrate that the proposed regularizer enhances the performance of various dynamic NeRFs. Moreover, ablation studies illustrate that our simple regularizer surpasses existing approaches that rely on resource-intensive deep models or costly preprocessing steps. |
| Researcher Affiliation | Industry | Sameera Ramasinghe, Violetta Shevchenko*, Gil Avraham, Hisham Husain, Anton van den Hengel, Amazon, Australia |
| Pseudocode | Yes | Algorithm 1 The proposed regularization loss for each optimization iteration. |
| Open Source Code | Yes | Our code is at https://github.com/samgregoost/OTDNeRF/ |
| Open Datasets | Yes | We use the iPhone dataset proposed by Gao et al. (2022), the HyperNeRF interpolation dataset, and the HyperNeRF vrig dataset (see Appendix) proposed by Park et al. (2021b) for evaluation. |
| Dataset Splits | No | The paper describes training data usage (e.g., 'taking only every fourth frame for training') and evaluation methods but does not explicitly provide details on validation dataset splits, such as percentages or specific sample counts. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU or CPU models used for conducting the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We used β = 0.1. Although augmenting t per scene leads to improved results, we opted to fix it to 0.1 across all scenes and datasets to better demonstrate the robustness of our method. We used 256 or 512 for n and 2048 or 4096 for d, depending on the model size. |
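The paper's Algorithm 1 is not reproduced in this report, so the exact form of the regularizer cannot be restated here. As a purely illustrative sketch, the snippet below shows how the reported hyperparameters might combine: a regularization weight β = 0.1 scaling a one-dimensional Wasserstein-1 (optimal transport) distance between two equal-size empirical samples of size n. The function name `w1_regularizer` and the reduction of features to scalars are assumptions for illustration, not the paper's method; for equal-size 1-D samples, W1 reduces to the mean absolute difference of the sorted values.

```python
import numpy as np

def w1_regularizer(a, b, beta=0.1):
    """Illustrative OT-style penalty: 1-D Wasserstein-1 distance between two
    equal-size empirical samples, scaled by the weight beta (the paper fixes
    beta = 0.1 across all scenes). For equal-size 1-D samples, W1 equals the
    mean absolute difference of the sorted values."""
    a_sorted = np.sort(np.asarray(a, dtype=float))
    b_sorted = np.sort(np.asarray(b, dtype=float))
    return beta * float(np.mean(np.abs(a_sorted - b_sorted)))

# Hypothetical usage with n = 256 samples (the paper reports n = 256 or 512);
# the two sample vectors here are random stand-ins, not NeRF outputs.
rng = np.random.default_rng(0)
n = 256
pred = rng.normal(0.0, 1.0, size=n)
target = rng.normal(0.1, 1.0, size=n)
loss = w1_regularizer(pred, target)  # non-negative scalar penalty
```

A closed-form sorted-sample W1 like this is cheap (O(n log n)), which is consistent with the report's note that the regularizer avoids resource-intensive deep models or costly preprocessing.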