Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dual-Path in Dual-Path Network for Single Image Dehazing
Authors: Aiping Yang, Haixin Wang, Zhong Ji, Yanwei Pang, Ling Shao
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our proposed DPDP-Net achieves competitive performance against the state-of-the-art methods on both synthetic and real-world images. (Abstract) ... In this section, we demonstrate the superiority of the proposed DPDP-Net on the Synthetic and Real-World Image datasets against several state-of-art single image dehazing methods ... (Section 4 Experimental Results) |
| Researcher Affiliation | Collaboration | 1 School of Electrical and Information Engineering, Tianjin University, China 2 Inception Institute of Artificial Intelligence, Abu Dhabi, UAE EMAIL, EMAIL |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | The training dataset is composed of 400 synthesized hazy images, which are generated with Eq.(1). Among them, 300 indoor images are from the NYU2 depth dataset [Silberman et al., 2012] and 100 outdoor images (most of them have large areas of the sky) are from the ImageNet dataset [Deng et al., 2009]. (Section 4.1 Experimental Settings) |
| Dataset Splits | No | In addition, a small validation dataset of some additional hazy and ground truth pairs are selected randomly for tracking model performance and empirically determining the parameters of the proposed model. (Section 4.1 Experimental Settings) However, specific percentages or counts for the train/validation/test splits are not provided. |
| Hardware Specification | Yes | The whole experiment is conducted on a PC with an Intel(R) Xeon(R) CPU E5-1607 v3@3.1GHz and an Nvidia GeForce GTX 1080 Ti GPU. (Section 4.1 Experimental Settings) |
| Software Dependencies | No | The paper mentions general techniques like CNNs and specific functions like ReLU and guide filtering, but it does not specify any software frameworks (e.g., TensorFlow, PyTorch) or libraries with their version numbers. |
| Experiment Setup | Yes | The training is conducted on patches with size 64x64 sampled from synthesized hazy images. In total, we sample more than 30,000 patches to train the network. ... the learning rate is set to 0.001 for the first 60 epochs, and 0.0001 for the remaining epochs. Stochastic Gradient Descent (SGD) is employed for learning transmission map and atmospheric light with 0.9 momentum and 0.05 decay parameter for training. The batch size is 64. (Section 4.1 Experimental Settings) |
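The training data quoted above are "generated with Eq.(1)"; in single image dehazing, that equation conventionally denotes the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) with transmission t(x) = exp(-β d(x)). A minimal sketch of that synthesis step, assuming this standard form; the `beta` and `airlight` values are illustrative defaults, not settings reported in the paper:

```python
import numpy as np

def synthesize_hazy(clear, depth, beta=1.0, airlight=0.8):
    """Synthesize a hazy image via the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))

    clear:    HxWx3 clean image J with values in [0, 1]
    depth:    HxW depth map d (e.g. from the NYU2 depth dataset)
    beta:     scattering coefficient (illustrative, not from the paper)
    airlight: global atmospheric light A (illustrative, not from the paper)
    """
    t = np.exp(-beta * depth)[..., np.newaxis]  # transmission map, HxWx1
    return clear * t + airlight * (1.0 - t)
```

With zero depth the transmission is 1 and the output equals the clean image; as depth grows, the output converges to the atmospheric light.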
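The optimization settings reported in Section 4.1 can be collected into a small configuration sketch, with the stepped learning-rate schedule (0.001 for the first 60 epochs, 0.0001 afterwards) written as a plain function. This is a paraphrase of the quoted hyperparameters for reference, not released training code:

```python
# Hyperparameters as reported in Section 4.1 of the paper.
SGD_CONFIG = {
    "momentum": 0.9,      # SGD momentum
    "weight_decay": 0.05, # "0.05 decay parameter"
    "batch_size": 64,
    "patch_size": 64,     # 64x64 training patches
}

def learning_rate(epoch):
    """Stepped schedule: 1e-3 for the first 60 epochs (0-indexed
    epochs 0..59), then 1e-4 for the remaining epochs."""
    return 1e-3 if epoch < 60 else 1e-4
```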