Structure Flow-Guided Network for Real Depth Super-resolution
Authors: Jiayi Yuan, Haobo Jiang, Xiang Li, Jianjun Qian, Jun Li, Jian Yang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real and synthetic DSR datasets verify that our approach achieves excellent performance compared to state-of-the-art methods. |
| Researcher Affiliation | Academia | PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Lab of Image and Video Understanding for Social Security; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. {jiayiyuan, jiang.hao.bo, xiang.li.implus, csjqian, junli, csjyang}@njust.edu.cn |
| Pseudocode | No | The paper provides network architectures and mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at: https://github.com/Yuanjiayii/DSR SFG. |
| Open Datasets | Yes | To evaluate the performance of our method, we perform extensive experiments on the real-world RGB-D-D dataset (He et al. 2021), the ToFMark dataset (Ferstl et al. 2013), and the synthetic NYU-v2 dataset (Silberman et al. 2012). |
| Dataset Splits | No | During training, we randomly crop patches of resolution 256×256 as ground truth, and the training and testing data are normalized to the range [0, 1]. Following FDSR (He et al. 2021), we first use 2215 hand-filled RGB/D pairs for training and 405 RGB/D pairs for testing; we then sample 1000 RGB-D pairs for training and use the remaining 449 RGB-D pairs for testing. The paper specifies training and testing splits but does not explicitly mention a separate validation split. |
| Hardware Specification | Yes | We implement our model with PyTorch and conduct all experiments on a server containing an Intel i5 2.2 GHz CPU and a TITAN RTX GPU with almost 24 GB of memory. |
| Software Dependencies | No | The paper states 'We implement our model with PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | During training, we randomly crop patches of resolution 256×256 as ground truth, and the training and testing data are normalized to the range [0, 1]. In order to balance the training time and network performance, the parameters L, L′, K, T are set to 3, 6, 3, 2 in this paper. |
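The preprocessing described in the experiment setup (random 256×256 crops of aligned RGB/depth pairs, followed by normalization to [0, 1]) can be sketched as below. This is a minimal NumPy illustration, not the authors' code; the function names `random_crop_pair` and `normalize01` are our own, and the paper does not specify whether normalization is per-image min-max scaling as assumed here.

```python
import numpy as np


def random_crop_pair(rgb, depth, patch=256, rng=None):
    """Randomly crop aligned RGB/depth patches of size patch x patch.

    Both arrays must share the same spatial dimensions so the crop
    stays aligned between modalities.
    """
    rng = rng or np.random.default_rng()
    h, w = depth.shape[:2]
    top = int(rng.integers(0, h - patch + 1))
    left = int(rng.integers(0, w - patch + 1))
    return (rgb[top:top + patch, left:left + patch],
            depth[top:top + patch, left:left + patch])


def normalize01(x):
    """Scale an array to the range [0, 1] (assumed per-image min-max).

    Constant inputs map to all zeros to avoid division by zero.
    """
    x = x.astype(np.float32)
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x)
    return (x - lo) / (hi - lo)
```

For example, cropping a 300×400 RGB/depth pair yields a 256×256 patch from each, and `normalize01` rescales the depth patch so its values span exactly [0, 1].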