Omnidirectional Image Super-resolution via Bi-projection Fusion

Authors: Jiangang Wang, Yuning Cui, Yawen Li, Wenqi Ren, Xiaochun Cao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that BPOSR achieves state-of-the-art performance on omnidirectional image super-resolution." From the Experiments section (Dataset and Implementation Details): "We verify the effectiveness of our method using the widely used datasets: ODI-SR (Deng et al. 2021) and SUN360 (Xiao et al. 2012), which contain various types of panoramic scenes."
Researcher Affiliation | Academia | Jiangang Wang (1), Yuning Cui (2), Yawen Li (3), Wenqi Ren (1)*, Xiaochun Cao (1); affiliations: (1) Shenzhen Campus of Sun Yat-sen University, (2) Technical University of Munich, (3) Beijing University of Posts and Telecommunications.
Pseudocode | No | The paper describes the overall architecture and components such as HSTB, PSTB, and BAFM, and illustrates them with diagrams, but it does not include formal pseudocode or algorithm blocks (a hypothetical two-branch skeleton is sketched after the table).
Open Source Code | Yes | "The code is available at https://github.com/W-JG/BPOSR."
Open Datasets | Yes | "We verify the effectiveness of our method using the widely used datasets: ODI-SR (Deng et al. 2021) and SUN360 (Xiao et al. 2012), which contain various types of panoramic scenes."
Dataset Splits | No | The paper mentions training and test sets ("The model is trained using 1200 training images of ODI-SR and evaluated on the test sets of ODI-SR and SUN360, both containing 100 images.") but does not specify a separate validation split or its size.
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models, or memory specifications.
Software Dependencies | No | The paper does not list version numbers for the software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "The model is trained for 500k iterations with the initial learning rate as 2×10⁻⁴, which is halved at 250k, 400k, 450k, and 475k iterations. In our model, K is set to 4, and the numbers of STL and HSTL are both set to 6. The attention window sizes of HSTB and PSTB are set as 4×16 and 8×8, respectively. The model feature dimension is set to 60, and the rotation magnification in PSTB is set to 3 times." (A configuration sketch of this schedule follows the table.)
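
For readers reconstructing the training setup, the quoted schedule maps onto a standard step-wise learning-rate decay. The sketch below assumes PyTorch and an Adam optimizer; only the initial learning rate, the halving milestones, and the 500k-iteration budget come from the paper, while the placeholder model, optimizer choice, and loss handling are illustrative assumptions.

```python
import torch

# Minimal sketch of the quoted schedule. Values from the paper: lr = 2e-4,
# halved at 250k/400k/450k/475k iterations, 500k iterations total.
# The stand-in model and the choice of Adam are assumptions, not BPOSR itself.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[250_000, 400_000, 450_000, 475_000], gamma=0.5
)

for iteration in range(500_000):
    optimizer.zero_grad()
    # loss = l1_loss(model(lr_patch), hr_patch)  # data pipeline omitted
    # loss.backward()
    optimizer.step()   # no-op here because no gradients were computed
    scheduler.step()   # halves the learning rate at each milestone above
```

MultiStepLR with gamma=0.5 reproduces the "halved at 250k, 400k, 450k, and 475k iterations" behaviour exactly; everything else in the loop is scaffolding.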
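
Since the paper provides no pseudocode, the following is a purely hypothetical PyTorch skeleton of a two-branch, fuse-then-upsample design, using the component names HSTB, PSTB, and BAFM only as labels. The block internals are placeholder convolutions and the projection transform between branches is omitted entirely; this is not the authors' method, whose actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class BranchBlock(nn.Module):
    """Placeholder residual block; stands in for the paper's transformer blocks."""
    def __init__(self, dim: int = 60):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class BiProjectionSkeleton(nn.Module):
    """Two projection branches fused (BAFM role) before pixel-shuffle upsampling."""
    def __init__(self, dim: int = 60, num_blocks: int = 4, scale: int = 4):
        super().__init__()
        self.head = nn.Conv2d(3, dim, 3, padding=1)
        self.erp_branch = nn.Sequential(*[BranchBlock(dim) for _ in range(num_blocks)])  # "HSTB" role
        self.alt_branch = nn.Sequential(*[BranchBlock(dim) for _ in range(num_blocks)])  # "PSTB" role
        self.fuse = nn.Conv2d(2 * dim, dim, 1)                                           # "BAFM" role
        self.tail = nn.Sequential(
            nn.Conv2d(dim, 3 * scale * scale, 3, padding=1), nn.PixelShuffle(scale)
        )

    def forward(self, erp_lr):
        feat = self.head(erp_lr)
        fused = self.fuse(torch.cat([self.erp_branch(feat), self.alt_branch(feat)], dim=1))
        return self.tail(fused)

# Quick shape check: a 64x128 ERP patch upscaled x4 to 256x512.
sr = BiProjectionSkeleton()(torch.randn(1, 3, 64, 128))
print(sr.shape)  # torch.Size([1, 3, 256, 512])
```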