Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction
Authors: De Cheng, Yan Li, Dingwen Zhang, Nannan Wang, Xinbo Gao, Jiande Sun
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on two synthetic and three real-world datasets demonstrate that our method significantly surpasses the state-of-the-art approaches. |
| Researcher Affiliation | Academia | De Cheng (1), Yan Li (2), Dingwen Zhang (3,4), Nannan Wang (1), Xinbo Gao (5), Jiande Sun (2); (1) Xidian University, (2) Shandong Normal University, (3) Northwestern Polytechnical University, (4) Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, (5) Chongqing University of Posts and Telecommunications |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The widely used synthetic dataset RESIDE [Li et al., 2018b] contains two subsets, i.e., the Indoor Training Set (ITS) and the Outdoor Training Set (OTS). ITS and OTS are used for training, and they have a corresponding testing dataset, namely the Synthetic Objective Testing Set (SOTS)... We also evaluate the proposed model on three popular real-world datasets: the NTIRE 2018 image dehazing indoor dataset (referred to as I-Haze) [Ancuti et al., 2018b], the NTIRE 2018 image dehazing outdoor dataset (O-Haze) [Ancuti et al., 2018a], and the NTIRE 2019 dense image dehazing dataset (Dense-Haze) [Ancuti et al., 2019]. |
| Dataset Splits | No | The paper mentions training and testing datasets (ITS, OTS, SOTS) but does not explicitly specify validation dataset splits or general data partitioning percentages (e.g., 80/10/10). |
| Hardware Specification | Yes | We implement our method based on PyTorch with NVIDIA RTX 2080Ti GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'VGG-19' (a model), but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | In the training process, we randomly crop 240 × 240 image patches as input and adopt the Adam optimizer for optimization. The learning rate is initially set to 1 × 10−4 and is adjusted using the cosine annealing strategy [He et al., 2019]. We follow [Wu et al., 2021] to select the features of the 1st, 3rd, 5th, 9th and 13th layers from the fixed pre-trained VGG-19 [Simonyan and Zisserman, 2014] to calculate the L1 distance in Eq. (2), and their corresponding weight factors ωm are set to 1/32, 1/16, 1/8, 1/4 and 1, respectively. The number of negative examples used in Eq. (1) is set to K = 5, and the parameter τ = 0.5. The hyper-parameters λ1 and λ2 in Eq. (5) are set to 1.0 and 10.0, respectively. (A hedged configuration sketch follows the table.) |
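
The training configuration quoted in the Experiment Setup row above can be summarized in code. The sketch below is a minimal PyTorch illustration, not the authors' implementation: the `TinyDehazeNet` placeholder network, the InfoNCE-style form of the contrastive term, the way λ1 and λ2 combine the loss terms of Eq. (5), and the training-loop skeleton are all assumptions. Only the quoted hyper-parameters (240 × 240 crops, Adam with an initial learning rate of 1 × 10−4 and cosine annealing, VGG-19 layers 1/3/5/9/13 with weights 1/32 to 1, K = 5, τ = 0.5, λ1 = 1.0, λ2 = 10.0) are taken from the paper.

```python
# Minimal PyTorch sketch of the reported setup. Only the quoted
# hyper-parameters come from the paper; everything else is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19


class VGG19Features(nn.Module):
    """Frozen VGG-19 features from the 1st, 3rd, 5th, 9th and 13th layers
    (counting sequential modules of vgg19().features; this layer
    convention is an assumption)."""
    def __init__(self, layer_ids=(1, 3, 5, 9, 13)):
        super().__init__()
        self.vgg = vgg19(pretrained=True).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.layer_ids = set(layer_ids)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg, start=1):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats


def weighted_l1(features, pred, target,
                weights=(1 / 32, 1 / 16, 1 / 8, 1 / 4, 1.0)):
    """Multi-layer L1 distance with the quoted weights ω_m (the Eq. (2) term)."""
    return sum(w * F.l1_loss(a, b)
               for w, a, b in zip(weights, features(pred), features(target)))


def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style term with K negatives and temperature tau.
    This generic form is an assumption; the paper's Eq. (1) may differ."""
    a = F.normalize(anchor.flatten(1), dim=1)
    p = F.normalize(positive.flatten(1), dim=1)
    pos = torch.exp((a * p).sum(dim=1) / tau)
    neg = sum(torch.exp((a * F.normalize(n.flatten(1), dim=1)).sum(dim=1) / tau)
              for n in negatives)
    return -torch.log(pos / (pos + neg)).mean()


class TinyDehazeNet(nn.Module):
    """Hypothetical stand-in for the paper's dehazing backbone."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return self.body(x)


if __name__ == "__main__":
    model, features = TinyDehazeNet(), VGG19Features()
    lambda1, lambda2 = 1.0, 10.0                       # Eq. (5) weights
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

    # One dummy step on random 240x240 "patches"; real training would use
    # randomly cropped RESIDE pairs and hazy images as negatives.
    hazy, clear = torch.rand(2, 3, 240, 240), torch.rand(2, 3, 240, 240)
    negatives = [torch.rand(2, 3, 240, 240) for _ in range(5)]   # K = 5

    pred = model(hazy)
    # How the three terms are combined is an assumption about Eq. (5).
    loss = (F.l1_loss(pred, clear)
            + lambda1 * weighted_l1(features, pred, clear)
            + lambda2 * contrastive_loss(pred, clear, negatives, tau=0.5))
    loss.backward()
    optimizer.step()
    scheduler.step()
    print(f"total loss: {loss.item():.4f}")
```

For a faithful reproduction, the placeholder network would be replaced by the paper's dehazing backbone, the negatives drawn as hazy images following [Wu et al., 2021], and ImageNet normalization applied before VGG-19 feature extraction (omitted here for brevity); the `T_max` of the cosine schedule is likewise a placeholder, since the paper's quoted setup does not state the training length.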