Bidirectional Optical Flow NeRF: High Accuracy and High Quality under Fewer Views
Authors: Shuo Chen, Binbin Yan, Xinzhu Sang, Duo Chen, Peng Wang, Xiao Guo, Chongli Zhong, Huaming Wan
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on the NeRF-LLFF and DTU MVS benchmarks for novel view synthesis tasks with fewer images in different complex real scenes. We further demonstrate the robustness of BOF-NeRF under different baseline distances on the Middlebury dataset. In all cases, BOF-NeRF outperforms current state-of-the-art baselines for novel view synthesis and scene geometry estimation. |
| Researcher Affiliation | Academia | Shuo Chen, Binbin Yan*, Xinzhu Sang, Duo Chen, Peng Wang, Xiao Guo, Chongli Zhong, Huaming Wan, State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, {shuochen365, yanbinbin, xzsang, chenduo, wps1215, 2014212810, zclda, wanhuaming}@bupt.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We test our method on three datasets: Middlebury 2005 Stereo datasets (Hirschmuller and Scharstein 2007), NeRF-LLFF datasets, and DTU MVS datasets. |
| Dataset Splits | No | The paper mentions training and testing but does not specify a validation dataset split or how hyperparameters were tuned using a validation set. |
| Hardware Specification | Yes | We run our experiments on a PC with a 3.7 GHz Intel Core i9-10900K CPU, 32GB RAM, and NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions 'Pytorch' and 'adam optimizer' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | Specifically, our BOF-NeRF network is a two-stage network, where each stage is a fully connected ReLU network with eight layers and 256 channels. The network model is trained via PyTorch (Paszke et al. 2019) using the Adam (Kingma and Ba 2014) optimizer with the learning rate set to 5 × 10⁻⁴. ... The number of ray samples in all networks is set to 64. |
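
Since the paper releases no code (see the Open Source Code row above), the quoted setup can only be illustrated. Below is a minimal PyTorch sketch of the described architecture: two stages, each an eight-layer, 256-channel fully connected ReLU network, trained with Adam at a learning rate of 5 × 10⁻⁴ and 64 ray samples. The input/output widths, the positional-encoding dimension, and the coarse/fine pairing are assumptions borrowed from the standard NeRF recipe, not details stated in the paper.

```python
# Hedged sketch of the quoted experiment setup; NOT the authors' code.
# Assumed: 63-dim positionally encoded input (3 coords x 2 x 10 freqs + 3)
# and a 4-dim RGB + density output, both standard NeRF conventions.
import torch
import torch.nn as nn


class BOFNeRFStage(nn.Module):
    """One stage: a fully connected ReLU network, eight layers, 256 channels."""

    def __init__(self, in_dim=63, hidden=256, depth=8, out_dim=4):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, out_dim)  # RGB + volume density

    def forward(self, x):
        return self.head(self.backbone(x))


# "Two-stage network": the coarse/fine split is an assumption from vanilla NeRF.
coarse, fine = BOFNeRFStage(), BOFNeRFStage()

# Adam optimizer with learning rate 5e-4, per the quoted setup.
optimizer = torch.optim.Adam(
    list(coarse.parameters()) + list(fine.parameters()), lr=5e-4
)

N_RAY_SAMPLES = 64  # "The number of ray samples in all networks is set to 64."
```

A forward pass on a batch of encoded ray samples would then look like `rgb_sigma = coarse(torch.randn(1024, 63))`, with the fine stage evaluated on resampled points in the usual two-pass manner.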