Learning Continuous Depth Representation via Geometric Spatial Aggregator

Authors: Xiaohang Wang, Xuanhong Chen, Bingbing Ni, Zhengyan Tong, Hang Wang

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on standard depth map benchmarks, e.g., NYU v2, have demonstrated that the proposed framework achieves significant restoration gain in arbitrary scale depth map super-resolution compared with the prior art.
Researcher Affiliation | Academia | Xiaohang Wang*, Xuanhong Chen*, Bingbing Ni, Zhengyan Tong, Hang Wang; Shanghai Jiao Tong University, Shanghai 200240, China
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our codes are available at https://github.com/nana01219/GeoDSR.
Open Datasets | Yes | We select three benchmark RGB-D datasets to evaluate the proposed framework: (1) NYU v2 dataset (Silberman et al. 2012); (2) Lu dataset (Lu, Ren, and Liu 2014); (3) Middlebury dataset (Hirschmuller and Scharstein 2007; Scharstein and Pal 2007).
Dataset Splits | No | The paper specifies 1000 pairs for training and 449 for evaluation on the NYU v2 dataset, but does not explicitly define a separate validation split, distinct from the test set, for hyperparameter tuning.
Hardware Specification | Yes | We use a GeForce RTX 3090 Ti GPU to train the model, and the whole training process takes about 12 hours.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify versions for programming languages, libraries, or other software dependencies.
Experiment Setup | Yes | In each stage, we set the initial learning rate as 0.0001, and then decay it by a factor of 0.2 every 60 epochs. In both stages, the model is trained on the first 1000 pairs of RGB-D images from the NYU v2 dataset for 200 epochs with a batch size of 1. In the first stage, s is fixed at 8; in the second stage, s is randomly sampled from the uniform distribution U(1, 16).
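
For concreteness, the reported schedule can be written as a short training-loop sketch. This is a minimal sketch assuming a PyTorch implementation; the `GeoDSR` model class, the `NYUv2Pairs` dataset wrapper, the model's call signature, the bicubic downsampling step, and the L1 loss are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the two-stage training schedule described above.
# Hypothetical pieces (NOT from the paper): the GeoDSR model class, the
# NYUv2Pairs dataset wrapper, the model's call signature, and the L1 loss.
import random

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

from geodsr import GeoDSR        # hypothetical model module
from data import NYUv2Pairs      # hypothetical NYU v2 RGB-D pair loader


def train_stage(model, fixed_scale=None, epochs=200):
    # First 1000 RGB-D pairs of NYU v2, batch size 1 (as reported).
    loader = DataLoader(NYUv2Pairs(first_n=1000), batch_size=1, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Learning-rate decay by a factor of 0.2 every 60 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=60, gamma=0.2)

    for _ in range(epochs):
        for rgb, depth_hr in loader:
            # Stage 1: s fixed at 8; stage 2: s ~ U(1, 16) per iteration.
            s = fixed_scale if fixed_scale is not None else random.uniform(1, 16)
            # Synthesize the low-resolution depth input (bicubic is an assumption).
            depth_lr = F.interpolate(depth_hr, scale_factor=1.0 / s,
                                     mode="bicubic", align_corners=False)
            pred = model(rgb, depth_lr, scale=s)   # hypothetical signature
            loss = F.l1_loss(pred, depth_hr)       # loss choice is an assumption
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()


model = GeoDSR()
train_stage(model, fixed_scale=8)     # stage 1: fixed scale
train_stage(model, fixed_scale=None)  # stage 2: random scale in U(1, 16)
```

Only the scale-sampling rule changes between the two stages; the optimizer, learning-rate schedule, data, and epoch budget stay the same, which matches the setup quoted above.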