Robust Depth Completion with Uncertainty-Driven Loss Functions

Authors: Yufan Zhu, Weisheng Dong, Leida Li, Jinjian Wu, Xin Li, Guangming Shi. Pages 3626-3634.

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method has been tested on KITTI Depth Completion Benchmark and achieved the state-of-the-art robustness performance in terms of MAE, IMAE, and IRMSE metrics.
Researcher Affiliation | Academia | 1 School of Artificial Intelligence, Xidian University, Xi'an 710071, China; 2 Lane Dept. of CSEE, West Virginia University, Morgantown WV 26506, USA
Pseudocode | No | The paper describes network architectures and mathematical formulations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | The KITTI depth completion benchmark (Uhrig et al. 2017) has 86898 Lidar frames for training, 1000 frames for validation, and 1000 frames for testing.
Dataset Splits | Yes | The KITTI depth completion benchmark (Uhrig et al. 2017) has 86898 Lidar frames for training, 1000 frames for validation, and 1000 frames for testing.
Hardware Specification | Yes | Our training is implemented by Pytorch with 5 NVIDIA GTX2080Ti GPUs and set batch-size to 5.
Software Dependencies | No | The paper mentions 'Pytorch' but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | Our training is implemented by Pytorch with 5 NVIDIA GTX2080Ti GPUs and set batch-size to 5. In our current implementation, we have used ADAM (Kingma and Ba 2014) as the optimization algorithm. We have set the learning rate to 1×10^-4 when we train our multiscale joint prediction model and 2×10^-4 when training uncertainty attention residual learning model. The other parameters are all the same with (β1, β2) = (0.9, 0.999), eps = 1×10^-8 and weight decay = 0.
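The optimizer settings quoted above (learning rate 1×10^-4, betas (0.9, 0.999), eps 1×10^-8, weight decay 0) are the standard Adam hyperparameters. A minimal pure-Python sketch of a single Adam update (Kingma and Ba 2014) using those values; the scalar parameter and gradient value here are illustrative stand-ins, not part of the paper:

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (theta, m, v).

    Hyperparameter defaults match the paper's reported setup
    (weight decay is 0, so no decay term appears).
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction, step t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative single step: the very first Adam update moves the
# parameter by roughly the learning rate, regardless of gradient scale.
theta, m, v = adam_step(theta=0.5, grad=2.0, m=0.0, v=0.0, t=1)
```

For the second model the paper doubles the learning rate to 2×10^-4 while keeping every other hyperparameter identical.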