Recurrent Structure Attention Guidance for Depth Super-resolution
Authors: Jiayi Yuan, Haobo Jiang, Xiang Li, Jianjun Qian, Jun Li, Jian Yang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our approach obtains superior performance compared with state-of-the-art depth super-resolution methods. Our code is available at: https://github.com/Yuanjiayii/DSR RSAG. From the Experimental Setting: To evaluate the performance of our framework, we conduct sufficient experiments on five datasets: Middlebury (Hirschmuller and Scharstein 2007) & MPI Sintel (Butler et al. 2012): Training dataset consists of 34 RGB/D pairs from the Middlebury dataset and 58 RGB/D pairs from the MPI Sintel dataset. |
| Researcher Affiliation | Academia | PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Lab of Image and Video Understanding for Social Security; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. {jiayiyuan, jiang.hao.bo, xiang.li.implus, csjqian, junli, csjyang}@njust.edu.cn |
| Pseudocode | No | Not found. The paper describes the architecture and formulations but does not include pseudocode or an algorithm block. |
| Open Source Code | Yes | Our code is available at: https://github.com/Yuanjiayii/DSR RSAG. |
| Open Datasets | Yes | Middlebury (Hirschmuller and Scharstein 2007) & MPI Sintel (Butler et al. 2012): Training dataset consists of 34 RGB/D pairs from Middlebury dataset and 58 RGB/D pairs from MPI Sintel dataset. NYU-v2 (Silberman et al. 2012): Following the widely used data splitting manner, we sample 1000 pairs for training and the rest 449 pairs for testing. |
| Dataset Splits | No | Not found. The paper specifies training and testing splits (e.g., '1000 pairs for training and the rest 449 pairs for testing' for NYU-v2) but does not mention a separate validation split or its size/methodology. |
| Hardware Specification | Yes | The proposed method is implemented using PyTorch with one RTX 2080Ti GPU. |
| Software Dependencies | No | Not found. The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with versions. |
| Experiment Setup | Yes | To balance the training time and network performance, we set the recurrent steps of the SA blocks as k = 2 in this paper. The loss weights are set as λk = 0.5. The proposed method is implemented using PyTorch with one RTX 2080Ti GPU. During training, we randomly extract patches with stride = {96, 96, 128} for the scale = {4, 8, 16} respectively as ground-truth and use bicubic interpolation to get LR inputs. The training and testing data are normalized to the range [0, 1]. |
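The training-data pipeline described in the Experiment Setup row (random HR patch extraction at sizes {96, 96, 128} for scales {4, 8, 16}, downsampling to produce LR inputs, and normalization to [0, 1]) can be sketched as follows. This is a minimal, dependency-light illustration, not the authors' code: the function names are hypothetical, and simple average pooling stands in for the bicubic interpolation the paper uses.

```python
import numpy as np

def extract_patch(depth, patch_size, rng):
    """Randomly crop a square HR ground-truth patch
    (the paper uses sizes {96, 96, 128} for scales {4, 8, 16})."""
    h, w = depth.shape
    y = rng.integers(0, h - patch_size + 1)
    x = rng.integers(0, w - patch_size + 1)
    return depth[y:y + patch_size, x:x + patch_size]

def downsample(patch, scale):
    """Produce the LR input. The paper uses bicubic interpolation;
    average pooling is used here as a dependency-free stand-in."""
    p = patch.shape[0]
    assert p % scale == 0, "patch size must be divisible by the scale"
    return patch.reshape(p // scale, scale, p // scale, scale).mean(axis=(1, 3))

def normalize(x):
    """Map values to [0, 1], as stated in the experimental setup."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-8)

rng = np.random.default_rng(0)
hr_depth = rng.random((256, 320)).astype(np.float32) * 10.0  # synthetic depth map
gt = normalize(extract_patch(hr_depth, 128, rng))  # 128x128 ground truth (scale 16)
lr = downsample(gt, 16)                            # 8x8 LR network input
```

In a real reproduction, `downsample` would be replaced by bicubic resizing (e.g. a bicubic resize in an image library or a tensor framework) to match the paper's stated protocol.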