Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adaptive Dual-domain Learning for Underwater Image Enhancement

Authors: Lintao Peng, Liheng Bian

AAAI 2025

Reproducibility variables (Variable — Result — LLM Response):

- Research Type — Experimental. "Extensive experiments validate that the SS-UIE technique outperforms state-of-the-art UIE methods while requiring cheaper computational and memory costs. (...) Experiments. Experiment Setup. Datasets. We utilize the datasets UIEB (Li et al. 2020) and LSUI (Peng, Zhu, and Bian 2023) to evaluate our model. (...) Quantitative comparison of different UIE methods on the UIEB, LSUI and U45 datasets. The best results are highlighted in bold and the second best results are underlined."
- Researcher Affiliation — Academia. Beijing Institute of Technology, Beijing & Zhuhai, China.
- Pseudocode — No. The paper describes the methodology using textual explanations and diagrams (e.g., Figure 2, Figure 3) and mathematical equations (e.g., equations 1-15), but it does not include any explicitly labeled pseudocode blocks or algorithms.
- Open Source Code — Yes. Code: https://github.com/LintaoPeng/SS-UIE
- Open Datasets — Yes. "Datasets. We utilize the datasets UIEB (Li et al. 2020) and LSUI (Peng, Zhu, and Bian 2023) to evaluate our model. (...) In addition, to verify the generalization of SS-UIE, we use the non-reference benchmark U45 (Li, Li, and Wang 2019), which contains 45 underwater images for testing."
- Dataset Splits — Yes. "The UIEB dataset comprises 890 images with corresponding labels. Out of these, 800 images are allocated for training, and the remaining 90 are designated for testing. The LSUI dataset is randomly partitioned into 3879 images for training and 400 images for testing."
- Hardware Specification — No. The paper mentions "cheaper computational and memory costs" and includes Figure 1 comparing FLOPs and parameters, but it does not specify the particular hardware (e.g., GPU/CPU models, memory) used to run the experiments or measure these costs.
- Software Dependencies — No. The paper does not state specific software dependencies with version numbers (e.g., Python version, deep learning frameworks such as PyTorch or TensorFlow, CUDA version) used for the implementation or experiments.
- Experiment Setup — No. The paper defines the loss function L_total = L_1(I_gt, I_pred) + λ·L_FWL(F_gt, F_pred), where λ is a weight factor, and describes some architectural details such as the gated fusion block and the self-adaptive weights w_i. However, it does not provide specific hyperparameter values such as the learning rate, batch size, number of training epochs, or optimizer settings used for training.
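The quoted loss, L_total = L_1(I_gt, I_pred) + λ·L_FWL(F_gt, F_pred), can be sketched numerically. This is a minimal illustration only: the report does not define L_FWL, so the frequency-domain term below (L1 distance between 2-D FFT magnitudes) is an assumption, as are the function name `total_loss` and the default value of λ.

```python
import numpy as np

def total_loss(pred, gt, lam=0.1):
    """Sketch of a combined spatial + frequency loss (L_FWL is assumed)."""
    # Spatial-domain L1 term, per the quoted definition: L_1(I_gt, I_pred).
    l1 = np.abs(pred - gt).mean()
    # Hypothetical frequency term: L1 between 2-D FFT magnitude spectra.
    # The paper's actual L_FWL formulation is not given in this report.
    lfwl = np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(gt))).mean()
    return l1 + lam * lfwl
```

With identical prediction and ground truth the loss is zero; any spatial mismatch contributes through both terms, with λ weighting the frequency component.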