SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency

Authors: Cong Wang, Jinshan Pan, Wanyu Lin, Jiangxin Dong, Wei Wang, Xiao-Ming Wu

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our SelfPromer performs favorably against the state-of-the-art approaches on both synthetic and real-world datasets in terms of perception metrics including NIQE, PI, and PIQE. |
| Researcher Affiliation | Academia | (1) Department of Computing, The Hong Kong Polytechnic University; (2) School of Computer Science and Engineering, Nanjing University of Science and Technology; (3) International School of Information Science and Engineering, Dalian University of Technology |
| Pseudocode | No | The paper provides architectural diagrams and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source codes will be made available at https://github.com/supersupercong/SelfPromer. |
| Open Datasets | Yes | Following the protocol of (Yang et al. 2022), we use the RESIDE ITS (Li et al. 2019) as our training dataset and the SOTS-indoor (Li et al. 2019) and SOTS-outdoor (Li et al. 2019) as the testing datasets. |
| Dataset Splits | No | The paper names the training and testing datasets (RESIDE ITS, SOTS-indoor, SOTS-outdoor) but does not specify a validation split in terms of percentages or sample counts. |
| Hardware Specification | No | The paper does not specify the GPU or CPU models, or any other hardware details, used to run the experiments. |
| Software Dependencies | No | The paper states "Our implementation is based on the PyTorch" but does not provide version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use 10 PTBs, i.e., l = 10, in our model. We crop an image patch of 256 × 256 pixels. The batch size is 10. We use ADAM (Kingma and Ba 2015) with default parameters as the optimizer. The initial learning rate is 0.0001 and is divided by 2 at 160K, 320K, and 400K iterations. The model training terminates after 500K iterations. The weight parameters λ_code, λ_per, λ_adv, and λ_ssim are empirically set as 1, 1, 0.1, and 0.5. |
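The Experiment Setup row pins down the optimizer and learning-rate schedule precisely enough to sketch in code. The snippet below is a minimal PyTorch illustration of that schedule only, under stated assumptions: the model, the data, and the individual loss terms are hypothetical placeholders, not the authors' implementation; only the optimizer settings, milestone schedule, and loss weights come from the quoted text.

```python
# Minimal sketch of the training schedule quoted in the "Experiment Setup" row.
# The network and loss terms are placeholders; only the optimizer, learning-rate
# schedule, and weighting constants are taken from the paper's description.
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# Hypothetical stand-in for the dehazing transformer (the paper uses 10 PTBs, l = 10).
model = torch.nn.Conv2d(3, 3, 3, padding=1)

optimizer = Adam(model.parameters(), lr=1e-4)           # ADAM with default parameters
scheduler = MultiStepLR(optimizer,
                        milestones=[160_000, 320_000, 400_000],
                        gamma=0.5)                      # lr halved at 160K/320K/400K iterations

loss_weights = {"code": 1.0, "per": 1.0, "adv": 0.1, "ssim": 0.5}
total_iters, batch_size, crop_size = 500_000, 10, 256

for it in range(total_iters):
    # 256x256 hazy/clear patches would be sampled here; random tensors stand in for a dataloader.
    hazy = torch.rand(batch_size, 3, crop_size, crop_size)
    clear = torch.rand(batch_size, 3, crop_size, crop_size)

    pred = model(hazy)
    # Placeholder losses; the paper combines code, perceptual, adversarial, and SSIM terms.
    losses = {"code": (pred - clear).abs().mean(),
              "per": torch.tensor(0.0),
              "adv": torch.tensor(0.0),
              "ssim": torch.tensor(0.0)}
    loss = sum(loss_weights[k] * v for k, v in losses.items())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                                    # schedule counts iterations, not epochs
    break  # illustration only; actual training runs the full 500K iterations
```

Stepping the scheduler once per iteration rather than per epoch matches the paper's iteration-based milestones (160K, 320K, 400K) and 500K-iteration stopping point.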