Underwater Ranker: Learn Which Is Better and How to Be Better

Authors: Chunle Guo, Ruiqi Wu, Xin Jin, Linghao Han, Weidong Zhang, Zhi Chai, Chongyi Li

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the state-of-the-art performance of our method. The key designs of our method are discussed. Our code and dataset are available at https://lichongyi.github.io/URanker_files/.
Researcher Affiliation | Collaboration | (1) TMCC, CS, Nankai University; (2) School of Information Engineering, Henan Institute of Science and Technology; (3) Hisilicon Technologies Co. Ltd.; (4) S-Lab, Nanyang Technological University
Pseudocode | No | The paper describes the proposed URanker and NU2Net architectures using text and diagrams (Figures 2, 3, 4, and 5), but it does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and dataset are available at https://lichongyi.github.io/URanker_files/.
Open Datasets | Yes | To train the ranking-based UIQA network, we construct an underwater image dataset with rank, called URanker Set. [...] Our code and dataset are available at https://lichongyi.github.io/URanker_files/.
Dataset Splits | No | Following the experimental settings of (Li et al. 2021), we randomly select 800 image groups in URanker Set for training, and the remaining 90 groups are regarded as the testing set. Such settings are used for the UIQA and UIE experiments. The paper specifies train and test sets but does not explicitly mention a separate validation set. (A seeded split sketch is given after the table.)
Hardware Specification | Yes | All experiments are implemented by PyTorch on an NVIDIA Quadro RTX 8000 GPU.
Software Dependencies | No | The paper states 'All experiments are implemented by PyTorch' and 'the code implemented by MindSpore framework is also provided' but does not specify version numbers for these software dependencies.
Experiment Setup | Yes | We train our URanker for 100 epochs with the Adam optimizer with default parameters (β1 = 0.9, β2 = 0.999) and the fixed learning rate 1.0 × 10⁻⁵. For data augmentation, the input images are randomly flipped with a probability of 0.5 in both vertical and horizontal directions. The proposed NU2Net is trained for 250 epochs with a batch size of 16. Adam optimizer with an initial learning rate of 0.001 is adopted. The learning rate is adjusted by the cosine annealing strategy. All inputs are cropped into a size of 256 × 256 and the same data augmentation as training URanker is employed.
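
For concreteness, the 800/90 split quoted in the Dataset Splits row can be reproduced with a seeded shuffle of the 890 image groups. The seed and the zero-based group indexing below are assumptions of this sketch; the paper does not describe its selection procedure.

```python
# Sketch of the random 800/90 group split quoted above. The seed and the
# zero-based indexing of the 890 image groups are assumptions; the paper
# does not state how the random selection was performed.
import random

NUM_GROUPS = 890                 # 800 training + 90 testing groups in URanker Set
rng = random.Random(0)           # assumed seed, for reproducibility only

indices = list(range(NUM_GROUPS))
rng.shuffle(indices)
train_groups, test_groups = indices[:800], indices[800:]

assert len(train_groups) == 800 and len(test_groups) == 90
```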
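The Experiment Setup row maps onto standard PyTorch components: Adam with default betas, an initial learning rate of 0.001 adjusted by cosine annealing, 250 epochs, batch size 16, and random 256 × 256 crops with horizontal/vertical flips for NU2Net. The sketch below illustrates that configuration only; the stub network, dummy data, and L1 loss are placeholders rather than the authors' implementation, and the URanker training differs as quoted (100 epochs, a fixed learning rate of 1.0 × 10⁻⁵, no scheduler).

```python
# Minimal sketch of the quoted NU2Net training configuration. Only the
# optimizer, scheduler, epochs, batch size, crop size, and flip augmentation
# follow the paper; NU2NetStub, PairDataset, and the L1 loss are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

EPOCHS, BATCH_SIZE, CROP = 250, 16, 256

class NU2NetStub(nn.Module):
    """Placeholder standing in for the NU2Net enhancer (any image-to-image net)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.body(x)

def augment(raw, ref):
    """Random 256x256 crop plus vertical/horizontal flips, each with probability 0.5."""
    _, h, w = raw.shape
    top = int(torch.randint(0, h - CROP + 1, (1,)))
    left = int(torch.randint(0, w - CROP + 1, (1,)))
    raw = raw[:, top:top + CROP, left:left + CROP]
    ref = ref[:, top:top + CROP, left:left + CROP]
    if torch.rand(1) < 0.5:      # vertical flip
        raw, ref = raw.flip(-2), ref.flip(-2)
    if torch.rand(1) < 0.5:      # horizontal flip
        raw, ref = raw.flip(-1), ref.flip(-1)
    return raw, ref

class PairDataset(Dataset):
    """Dummy raw/reference pairs standing in for the URanker Set training split."""
    def __init__(self, n=32, size=320):
        self.raw = torch.rand(n, 3, size, size)
        self.ref = torch.rand(n, 3, size, size)

    def __len__(self):
        return len(self.raw)

    def __getitem__(self, i):
        return augment(self.raw[i], self.ref[i])

model = NU2NetStub()
loader = DataLoader(PairDataset(), batch_size=BATCH_SIZE, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
criterion = nn.L1Loss()          # placeholder objective, not the paper's loss

for epoch in range(EPOCHS):
    for raw, ref in loader:
        optimizer.zero_grad()
        loss = criterion(model(raw), ref)
        loss.backward()
        optimizer.step()
    scheduler.step()             # cosine annealing adjusts the learning rate per epoch
```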