DesNet: Decomposed Scale-Consistent Network for Unsupervised Depth Completion
Authors: Zhiqiang Yan, Kun Wang, Xiang Li, Zhenyu Zhang, Jun Li, Jian Yang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the superiority of our method on the outdoor KITTI benchmark, ranking 1st and outperforming the previous best KBNet by more than 12% in RMSE. In addition, our approach achieves state-of-the-art performance on the indoor NYUv2 dataset. |
| Researcher Affiliation | Academia | PCA Lab, Nanjing University of Science and Technology, China {Yanzq,kunwang,xiang.li.implus,junli,csjyang}@njust.edu.cn, zhangjesse@foxmail.com |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete statement or link regarding the availability of its source code. |
| Open Datasets | Yes | KITTI benchmark (Uhrig et al. 2017) consists of 86,898 RGB-D pairs for training, 7,000 for validating, and another 1,000 for testing. The official 1,000 validation images are used during training while the remaining images are ignored. NYUv2 dataset (Silberman et al. 2012) contains 464 RGB-D indoor scenes with 640×480 resolution. |
| Dataset Splits | Yes | KITTI benchmark (Uhrig et al. 2017) consists of 86,898 RGB-D pairs for training, 7,000 for validating, and another 1,000 for testing. The official 1,000 validation images are used during training while the remaining images are ignored. Following KBNet (Wong and Soatto 2021), we train our model on 46K frames and test on the official test set with 654 images. |
| Hardware Specification | Yes | We implement DesNet in PyTorch with 2 TITAN RTX GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch" and the "Adam (Kingma and Ba 2014) optimizer" but does not specify version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We train it for 25 epochs with the Adam (Kingma and Ba 2014) optimizer. The learning rate is gradually warmed up to 10^-4 in 3 steps, where each step increases the learning rate by 10^-4/3 over 500 iterations. After that, the learning rate of 10^-4 is used for the first 20 epochs and is reduced to half at the beginning of the 20th epoch. |
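
The learning-rate schedule quoted in the Experiment Setup row is concrete enough to sketch in code. Below is a minimal PyTorch reconstruction of that schedule only; the stand-in model, the number of iterations per epoch (`ITERS_PER_EPOCH`), and the exact epoch indexing for the halving point are assumptions for illustration, not details taken from the paper.

```python
import torch
from torch.optim import Adam

BASE_LR = 1e-4          # learning rate reached after warm-up (paper: 10^-4)
WARMUP_STEPS = 3        # warm-up happens in 3 equal steps
ITERS_PER_STEP = 500    # each step spans 500 iterations
HALVE_AT_EPOCH = 20     # epoch (0-indexed) from which the lr is halved; indexing is an interpretation
TOTAL_EPOCHS = 25
ITERS_PER_EPOCH = 1000  # assumption: the paper does not state iterations per epoch


def scheduled_lr(epoch: int, global_iter: int) -> float:
    """Learning rate for a given 0-indexed epoch and global training iteration."""
    warmup_total = WARMUP_STEPS * ITERS_PER_STEP
    if global_iter < warmup_total:
        # Each completed 500-iteration step raises the lr by BASE_LR / 3.
        completed_steps = global_iter // ITERS_PER_STEP + 1
        return BASE_LR * completed_steps / WARMUP_STEPS
    if epoch >= HALVE_AT_EPOCH:
        return BASE_LR / 2
    return BASE_LR


# Stand-in model: DesNet itself is not publicly released, so any small module works here.
model = torch.nn.Conv2d(4, 1, kernel_size=3, padding=1)
optimizer = Adam(model.parameters(), lr=BASE_LR)

global_iter = 0
for epoch in range(TOTAL_EPOCHS):
    for _ in range(ITERS_PER_EPOCH):
        for group in optimizer.param_groups:
            group["lr"] = scheduled_lr(epoch, global_iter)
        # The forward pass, unsupervised losses, and optimizer.step()
        # would go here in the actual training loop.
        global_iter += 1
```

Under this reading, the warm-up reaches 10^-4 after 1,500 iterations, the rate then stays constant through epoch 19, and the remaining 5 epochs run at 5×10^-5.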