Unsupervised Underwater Image Restoration: From a Homology Perspective

Authors: Zhenqi Fu, Huangxing Lin, Yan Yang, Shu Chai, Liyan Sun, Yue Huang, Xinghao Ding

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Extensive experiments show that USUIR achieves promising performance in both inference time and restoration quality." and "We conduct experiments on two real-world underwater image restoration datasets. We compare our method with six baseline methods in terms of full-reference and no-reference quality assessment metrics."
Researcher Affiliation: Academia. 1 School of Informatics, Xiamen University, China; 2 College of Computer Science & Technology, Hangzhou Dianzi University, China.
Pseudocode: No. The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: Yes. "Our code is available at: https://github.com/zhenqifu/USUIR"
Open Datasets: Yes. "We conduct experiments on two underwater image restoration datasets, i.e., UIEBD (Li et al. 2019b) and RUIE (Liu et al. 2020)."
Dataset Splits: No. The paper describes training and testing splits (e.g., "we apply the first 700 images for training and the rest 190 images for testing on the UIEBD dataset"), but it does not define a separate validation set with specific proportions or counts.
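The reported UIEBD protocol (first 700 images for training, remaining 190 for testing, no validation subset) can be sketched as follows; the file names here are placeholders, not the actual dataset listing.

```python
# Hedged sketch of the reported UIEBD split: the first 700 images train,
# the remaining 190 test; the paper describes no validation subset.
# UIEBD provides 890 paired images in total (700 + 190).
image_names = [f"img_{i:04d}.png" for i in range(890)]  # placeholder names

train_set = image_names[:700]  # "the first 700 images for training"
test_set = image_names[700:]   # "the rest 190 images for testing"

print(len(train_set), len(test_set))  # 700 190
```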
Hardware Specification: Yes. "We conduct experiments on an NVIDIA RTX 2080Ti GPU in PyTorch." and "We measure the inference time of different methods averaged on 100 images of size 512 × 512 on an NVIDIA RTX 2080Ti GPU and Intel(R) Xeon(R) E5-2678 v3 @ 2.50GHz CPU."
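The averaged-inference-time measurement follows a standard pattern: time many forward passes and divide by the run count. A minimal sketch, in which `restore` is a placeholder for any restoration model's forward pass (the paper averages over 100 images of size 512 × 512):

```python
import time

def restore(image):
    # Placeholder for a restoration model's forward pass; any callable
    # taking an image-like input fits this measurement pattern.
    return [[px for px in row] for row in image]

image = [[0.0] * 64 for _ in range(64)]  # small stand-in input for the sketch

n_runs = 100  # the paper averages over 100 images
start = time.perf_counter()
for _ in range(n_runs):
    restore(image)
avg_seconds = (time.perf_counter() - start) / n_runs
print(f"average inference time: {avg_seconds * 1e3:.3f} ms")
```

On a GPU, one would additionally synchronize the device before reading the clock so queued kernels are included in the measured time.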
Software Dependencies: No. The paper mentions PyTorch as the framework used ("We conduct experiments on an NVIDIA RTX 2080Ti GPU in PyTorch."), but it does not specify version numbers for PyTorch or other software dependencies.
Experiment Setup: Yes. "We employ the ADAM optimizer to optimize USUIR, the default learning rate is 1e-4, the batch size is 1, and the maximal epoch is 50. We augment the training data with rotation, flipping horizontally and vertically. The training images are resized to 128 × 128."
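The reported optimizer and training settings can be sketched in PyTorch as below. This is a minimal illustration only: the network and loss are placeholder stand-ins, not the actual USUIR architecture or objective, which must be taken from the released code.

```python
import torch

# Placeholder stand-in network (NOT the USUIR architecture).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)

# Reported settings: ADAM, learning rate 1e-4, batch size 1, 50 epochs,
# training images resized to 128 x 128.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
batch_size = 1
max_epochs = 50
patch_size = 128

# One illustrative optimization step on a random 128 x 128 image batch.
x = torch.rand(batch_size, 3, patch_size, patch_size)
out = model(x)
loss = torch.nn.functional.l1_loss(out, x)  # placeholder loss, not the paper's
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The reported augmentations (rotation, horizontal and vertical flips) would be applied to each training image before it is resized and batched.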