IEBins: Iterative Elastic Bins for Monocular Depth Estimation
Authors: Shuwei Shao, Zhongcai Pei, Xingming Wu, Zhong Liu, Weihai Chen, Zhengguo Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the KITTI, NYU-Depth-v2 and SUN RGB-D datasets demonstrate that the proposed method surpasses prior state-of-the-art competitors. |
| Researcher Affiliation | Academia | (1) School of Automation Science and Electrical Engineering, Beihang University, China; (2) School of Electrical Engineering and Automation, Anhui University, China; (3) SRO Department, Institute for Infocomm Research, A*STAR, Singapore |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. It provides mathematical equations for the GRU but not an algorithm. |
| Open Source Code | Yes | The source code is publicly available at https://github.com/ShuweiShao/IEBins. |
| Open Datasets | Yes | Extensive experiments on the KITTI [22], NYU-Depth-v2 [23] and SUN RGB-D [24] datasets... KITTI is an outdoor dataset... NYU-Depth-v2 is an indoor dataset... SUN RGB-D is collected from indoor scenes... |
| Dataset Splits | Yes | KITTI... The latter consists of 85898 training images, 1000 validation images and 500 test images without the depth ground-truth. NYU-Depth-v2... which involves 36253 images for training and 654 images for testing. |
| Hardware Specification | Yes | Our framework is implemented in the PyTorch library [50] and trained on 4 NVIDIA A5000 24GB GPUs. |
| Software Dependencies | No | Our framework is implemented in the PyTorch library [50] and trained on 4 NVIDIA A5000 24GB GPUs. We utilize the Adam optimizer [51]... The paper names PyTorch and the Adam optimizer but does not give version numbers for them or for any other software dependencies. |
| Experiment Setup | Yes | The training process runs a total number of 20 epochs and takes around 24 hours. We utilize the Adam optimizer [51] and a batch size of 8. The learning rate is gradually reduced from 2e-5 to 2e-6 via the polynomial decay strategy. |
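The training hyperparameters quoted in the Experiment Setup row (Adam optimizer, batch size 8, 20 epochs, polynomial learning-rate decay from 2e-5 to 2e-6) map onto a small PyTorch setup. The sketch below is an illustrative assumption, not the authors' code: the decay power (0.9) and the per-step update granularity are not stated in the paper and are chosen here only to make the example concrete.

```python
import torch

def build_optimizer(model, base_lr=2e-5):
    # Adam optimizer with the initial learning rate reported in the paper.
    return torch.optim.Adam(model.parameters(), lr=base_lr)

def poly_lr(step, total_steps, base_lr=2e-5, end_lr=2e-6, power=0.9):
    """Polynomial decay from base_lr to end_lr over total_steps.

    The power of 0.9 is an assumption for illustration; the paper only
    states that the rate is "gradually reduced from 2e-5 to 2e-6 via the
    polynomial decay strategy".
    """
    frac = min(step / max(total_steps, 1), 1.0)
    return (base_lr - end_lr) * (1.0 - frac) ** power + end_lr

# Hypothetical usage inside a training loop:
# for step in range(total_steps):
#     lr = poly_lr(step, total_steps)
#     for group in optimizer.param_groups:
#         group["lr"] = lr
#     ...  # forward pass, loss, backward, optimizer.step()
```

Whether the official repository applies the decay per iteration or per epoch is not specified here; the helper above depends only on the fraction of training completed, so either granularity can be plugged in.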