FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion
Authors: Lina Liu, Xibin Song, Xiaoyang Lyu, Junwei Diao, Mengmeng Wang, Yong Liu, Liangjun Zhang (pp. 2136-2144)
AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of our method against different state-of-the-art (SoTA) methods on diverse publicly available datasets, including the KITTI and NYUDv2 datasets. |
| Researcher Affiliation | Collaboration | Lina Liu (1,2), Xibin Song (2,3), Xiaoyang Lyu (1), Junwei Diao (1), Mengmeng Wang (1), Yong Liu (1), and Liangjun Zhang (2,3); (1) Institute of Cyber-Systems and Control, Zhejiang University, China; (2) Baidu Research, China; (3) National Engineering Laboratory of Deep Learning Technology and Application, China |
| Pseudocode | No | The paper describes methods in text and figures, but no explicitly labeled pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not contain an explicit statement of code release or a link to a public repository for the described methodology. |
| Open Datasets | Yes | KITTI Dataset and Implementation Details The KITTI dataset (Geiger et al. 2013) is a large outdoor dataset for autonomous driving, which contains 85k color images and corresponding sparse depth maps for training, 6k for validation, and 1k for testing. (...) NYUDv2 Dataset and Implementation Details The NYUDv2 (Silberman et al. 2012) dataset is comprised of video sequences from a variety of indoor scenes as recorded by both the color and depth cameras from the Microsoft Kinect. |
| Dataset Splits | Yes | The KITTI dataset (Geiger et al. 2013) is a large outdoor dataset for autonomous driving, which contains 85k color images and corresponding sparse depth maps for training, 6k for validation, and 1k for testing. In validation, 1000 color images and corresponding sparse depth maps are selected as validation data. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | All models are trained with Adam optimizer with β1=0.9, β2=0.999. We set batch size as 8, the learning rate starts from 1e-5 and reduces by 0.1 for every 10 epochs. The p in the loss function is set to 2. The models are trained for 20 epochs. (A hedged sketch of this setup follows the table.) |
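
The Experiment Setup row maps onto a conventional training configuration. Below is a minimal PyTorch sketch of those settings; the paper does not name its framework, and the one-layer placeholder model, dummy tensors, and MSE stand-in for the p = 2 loss are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Placeholder standing in for FCFR-Net (no official code is released): a single
# conv layer mapping an RGB image plus sparse depth (4 channels) to dense depth.
model = nn.Conv2d(4, 1, kernel_size=3, padding=1)

# Reported optimizer settings: Adam with beta1 = 0.9, beta2 = 0.999 and an
# initial learning rate of 1e-5.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999))

# The learning rate is reduced by a factor of 0.1 every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

# With p = 2 the per-pixel penalty is a squared-error term; MSE is used here as
# a stand-in (a real run would restrict it to valid ground-truth depth pixels).
criterion = nn.MSELoss()

batch_size = 8   # reported batch size
num_epochs = 20  # reported number of training epochs

for epoch in range(num_epochs):
    # One random batch per epoch just to exercise the loop; a real run would
    # iterate over KITTI or NYUDv2 training loaders instead.
    inputs = torch.randn(batch_size, 4, 64, 64)
    target = torch.randn(batch_size, 1, 64, 64)

    optimizer.zero_grad()
    loss = criterion(model(inputs), target)
    loss.backward()
    optimizer.step()
    scheduler.step()
```

With real data, only the model and the data loaders would change; the optimizer parameters, learning-rate schedule, loss exponent, batch size, and epoch count above are the values reported in the Experiment Setup row.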