Distortion and Uncertainty Aware Loss for Panoramic Depth Completion
Authors: Zhiqiang Yan, Xiang Li, Kun Wang, Shuo Chen, Jun Li, Jian Yang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the superiority of our method over standard loss functions, reaching the state of the art. |
| Researcher Affiliation | Academia | PCALab, School of Computer Science and Engineering, Nanjing University of Science and Technology, China; RIKEN Center for Advanced Intelligence Project, Japan. Correspondence to: Shuo Chen <shuo.chen.ya@riken.jp>, Jun Li <junli@njust.edu.cn>. |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not explicitly state that source code is released or provide a link to a code repository. |
| Open Datasets | Yes | Following M3PT (Yan et al., 2022), we train the model on Matterport3D (Albanis et al., 2021) and 3D60 (Zioulis et al., 2019) datasets with 512×256 resolution. |
| Dataset Splits | Yes | Matterport3D is composed of 7,907 RGB-D panoramas, 5,636 for training and 1,527 for testing. For 3D60, there are 6,669 RGB-D pairs for training and 1,831 for testing. |
| Hardware Specification | Yes | The whole training process is implemented in PyTorch with a single NVIDIA TITAN V GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version or other software dependencies with version numbers. |
| Experiment Setup | Yes | The AdamW optimizer is used with β1 = 0.9, β2 = 0.999, and weight decay 0.05. We train the model for 80 epochs with batch size 16 and an initial learning rate of 5×10⁻⁴, which drops by half every 20 epochs. Color jittering and random horizontal flip are used. µ and η are 80 and 0.5, respectively. |
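
The setup row maps onto a standard PyTorch optimizer and scheduler configuration. The sketch below is a minimal, hypothetical reconstruction under the reported hyperparameters; the paper releases no code, so the stub `model`, the jitter strengths, and the training-loop skeleton are assumptions rather than the authors' implementation.

```python
import torch
from torch import nn, optim
from torchvision import transforms

# Stand-in network: the authors' model is not released, so this stub is a
# hypothetical placeholder (RGB + sparse depth in, dense depth out).
model = nn.Conv2d(4, 1, kernel_size=3, padding=1)

# AdamW with the hyperparameters reported in the table above.
optimizer = optim.AdamW(
    model.parameters(),
    lr=5e-4,                 # initial learning rate 5x10^-4
    betas=(0.9, 0.999),      # beta1, beta2
    weight_decay=0.05,
)

# Learning rate drops by half every 20 epochs over 80 epochs total.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

# Reported augmentations: color jittering (RGB only) and random horizontal
# flip (applied jointly to RGB and depth). Jitter strengths are assumptions;
# the paper does not state them.
color_jitter = transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2)

# Loss hyperparameters from the paper; their roles are defined by the
# distortion- and uncertainty-aware loss, which is not reproduced here.
MU, ETA = 80, 0.5

EPOCHS, BATCH_SIZE = 80, 16
for epoch in range(EPOCHS):
    # ... one pass over 512x256 panoramas in batches of 16 would go here ...
    scheduler.step()  # halves the LR at epochs 20, 40, and 60
```

With `step_size=20` and `gamma=0.5`, `StepLR` matches the reported schedule exactly: the learning rate stays at 5×10⁻⁴ for epochs 0–19, then halves at epochs 20, 40, and 60.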