Feature Dense Relevance Network for Single Image Dehazing

Authors: Yun Liang, Enze Huang, Zifeng Zhang, Zhuo Su, Dong Wang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The extensive experiments on several synthetic and real world datasets demonstrate that our network surpasses most of the state-of-the-art methods."
Researcher Affiliation | Academia | (1) Guangzhou Key Laboratory of Intelligent Agriculture, College of Mathematics and Informatics, South China Agricultural University; (2) School of Computer Science and Engineering, Sun Yat-sen University; (3) Research Institute of Sun Yat-sen University in Shenzhen
Pseudocode | No | The paper describes the network architecture and provides mathematical formulas, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "RESIDE [Li et al., 2018] includes synthetic and real-world hazy/clear images which are collected indoor and outdoor. Its Indoor Training Set (ITS) contains 1,399 clean images and 13,990 hazy images generated by different atmosphere light and transmission maps." The O-Haze [Ancuti et al., 2018], Dense-Haze [Ancuti et al., 2019], and NH-Haze [Ancuti et al., 2020] datasets come from the NTIRE 2018, 2019, and 2020 challenges, respectively. (A data-loading sketch follows the table.)
Dataset Splits | No | The paper specifies training and testing sets (ITS, SOTS, O-Haze, Dense-Haze, NH-Haze) but does not explicitly mention a separate validation split for hyperparameter tuning or model selection. (A sketch of such a split follows the table.)
Hardware Specification | Yes | "Our network is trained by RTX 3090."
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify any software libraries or frameworks (e.g., PyTorch, TensorFlow) or their version numbers.
Experiment Setup | Yes | "The initial learning rate is set as 0.0001 and we adopt the cosine annealing strategy [Qin et al., 2019] to adjust the learning rate. The Adam optimizer is used whose betas parameter is remained default. Besides, for data augmentation, we use 240×240 window to randomly cut the training images and rotate them randomly. According to [Li et al., 2019], we train the network with the settings: γ1=0.03, γ2=0.03, γ3=0.02." (A training-setup sketch follows the table.)
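
The ITS pairing described in the Open Datasets row (1,399 clean images, each with ten hazy variants) maps naturally onto a paired dataset class. Below is a minimal PyTorch sketch; the `hazy/`-`clear/` directory layout, the filename convention (hazy `1400_5.png` pairing with clear `1400.png`), and the `ITSPairs` class name are illustrative assumptions, not details taken from the paper:

```python
# Minimal sketch of pairing RESIDE-ITS hazy/clear images in PyTorch.
# The directory layout and filename convention are assumptions for
# illustration; the paper does not specify how files are organized.
import os
from PIL import Image
from torch.utils.data import Dataset

class ITSPairs(Dataset):
    def __init__(self, root, transform=None):
        self.hazy_dir = os.path.join(root, "hazy")
        self.clear_dir = os.path.join(root, "clear")
        self.hazy_files = sorted(os.listdir(self.hazy_dir))
        self.transform = transform

    def __len__(self):
        return len(self.hazy_files)  # 13,990 hazy images in ITS

    def __getitem__(self, idx):
        hazy_name = self.hazy_files[idx]
        # Each clean image has multiple hazy variants: "1400_5.png" -> "1400.png"
        clear_name = hazy_name.split("_")[0] + os.path.splitext(hazy_name)[1]
        hazy = Image.open(os.path.join(self.hazy_dir, hazy_name)).convert("RGB")
        clear = Image.open(os.path.join(self.clear_dir, clear_name)).convert("RGB")
        if self.transform:
            # A joint transform keeps hazy/clear crops spatially aligned.
            hazy, clear = self.transform(hazy, clear)
        return hazy, clear
```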
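Since the paper reports no validation split, anyone re-running the experiments would have to carve one out themselves. A minimal sketch using `torch.utils.data.random_split`; the 90/10 ratio and the fixed seed are assumptions, not choices made in the paper:

```python
# Hypothetical 90/10 train/validation split over ITS; the ratio is an
# assumption -- the paper itself does not describe a validation set.
import torch
from torch.utils.data import random_split

dataset = ITSPairs("path/to/ITS")  # dataset class from the previous sketch
n_val = len(dataset) // 10
train_set, val_set = random_split(
    dataset, [len(dataset) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),  # fixed seed for reproducibility
)
```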
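The quoted experiment setup maps directly onto standard optimizer and scheduler components. A minimal sketch, assuming PyTorch (the paper names no framework) and treating the three γ-weighted terms as a generic weighted-sum loss, since the report does not identify the individual loss functions; the placeholder model, epoch count, and rotation range are likewise assumptions:

```python
# Sketch of the reported training setup: Adam with default betas, initial
# learning rate 1e-4 with cosine annealing, 240x240 random crops plus random
# rotation, and loss weights gamma1 = gamma2 = 0.03, gamma3 = 0.02.
import torch
import torchvision.transforms as T

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # placeholder for the dehazing network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # betas left at (0.9, 0.999)
num_epochs = 100  # assumed; not given in the report
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

# Data augmentation: random 240x240 crop and random rotation. In practice this
# must be applied jointly to each hazy/clear pair; shown on single images here.
augment = T.Compose([
    T.RandomCrop(240),
    T.RandomRotation(degrees=180),  # rotation range assumed
    T.ToTensor(),
])

gamma1, gamma2, gamma3 = 0.03, 0.03, 0.02  # loss-term weights from the paper

def total_loss(l1, l2, l3):
    # Weighted sum of the three loss terms; which losses the paper combines
    # is not spelled out in this report.
    return gamma1 * l1 + gamma2 * l2 + gamma3 * l3
```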