Spherical Pseudo-Cylindrical Representation for Omnidirectional Image Super-resolution
Authors: Qing Cai, Mu Li, Dongwei Ren, Jun Lyu, Haiyong Zheng, Junyu Dong, Yee-Hong Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on public datasets demonstrate the effectiveness of the proposed method as well as the consistently superior performance of our method over most state-of-the-art methods both quantitatively and qualitatively. |
| Researcher Affiliation | Academia | 1 Faculty of Computer Science and Technology, Ocean University of China; 2 School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen; 3 School of Computer Science and Technology, Harbin Institute of Technology; 4 School of Nursing, The Hong Kong Polytechnic University; 5 Department of Computing Science, University of Alberta |
| Pseudocode | No | The paper describes procedural steps in text but does not include a formal pseudocode or algorithm block. |
| Open Source Code | No | The paper mentions retraining comparison methods using their open-source codes, but does not provide an explicit statement or link for the open-source code of the proposed method. |
| Open Datasets | Yes | Following previous methods (Yoon et al. 2022), we also choose ODI-SR (Deng et al. 2021) as our training dataset, which contains 1200 training images, 100 validation images, and 100 testing images. We use the ODI-SR and SUN 360 Panorama (Xiao et al. 2012) as our test datasets. |
| Dataset Splits | Yes | Following previous methods (Yoon et al. 2022), we also choose ODI-SR (Deng et al. 2021) as our training dataset, which contains 1200 training images, 100 validation images, and 100 testing images. |
| Hardware Specification | Yes | The whole process is implemented in the PyTorch platform with 4 RTX3090 GPUs, each with 24GB of memory (Please see the supplementary for more details). |
| Software Dependencies | No | The paper mentions 'PyTorch platform' but does not specify its version number or any other software dependencies with specific versions. |
| Experiment Setup | Yes | Following previous works (Deng et al. 2021; Yoon et al. 2022), we train our model for the scales of 8 and 16, and all degraded datasets are obtained using bicubic interpolation. To avoid boundary artifacts between neighboring tiles, following previous work (Deng et al. 2021), an extra Ht/8 is added for neighboring tiles, where Ht denotes the height of each tile. The proposed model is trained by the ADAM optimizer (Kingma and Ba 2014) with a fixed initial learning rate of 10⁻⁴. |
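The Ht/8 overlap described in the Experiment Setup row can be illustrated with a minimal sketch. The `tile_rows` helper below is hypothetical (not the authors' released code) and assumes the image height divides evenly into tiles and each tile height Ht is divisible by 8; it returns padded row ranges so that neighboring tiles share Ht/8 rows on each internal boundary, which is the stated mechanism for avoiding seam artifacts.

```python
def tile_rows(height: int, num_tiles: int) -> list[tuple[int, int]]:
    """Split an image of `height` rows into `num_tiles` horizontal tiles,
    extending each tile by an extra Ht/8 rows of overlap toward its
    neighbors (clamped at the image borders), per Deng et al. 2021.

    Hypothetical helper for illustration only; assumes `height` is
    divisible by `num_tiles` and the tile height Ht by 8.
    """
    ht = height // num_tiles   # Ht: nominal height of each tile
    pad = ht // 8              # extra Ht/8 overlap rows per shared boundary
    ranges = []
    for i in range(num_tiles):
        start = max(0, i * ht - pad)            # pad upward, clamp at top
        end = min(height, (i + 1) * ht + pad)   # pad downward, clamp at bottom
        ranges.append((start, end))
    return ranges

# e.g. a 1024-row ERP image split into 8 tiles: Ht = 128, overlap = 16 rows
print(tile_rows(1024, 8))
```

After super-resolving each padded tile, the overlapping rows would be cropped (or blended) before stitching, so the seams between tiles never coincide with a tile's actual processing boundary.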