DehazeGAN: When Image Dehazing Meets Differential Programming
Authors: Hongyuan Zhu, Xi Peng, Vijay Chandrasekhar, Liyuan Li, Joo-Hwee Lim
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality. |
| Researcher Affiliation | Academia | Hongyuan Zhu¹, Xi Peng², Vijay Chandrasekhar¹, Liyuan Li¹, Joo-Hwee Lim¹; ¹Institute for Infocomm Research, A*STAR, Singapore; ²College of Computer Science, Sichuan University, China; {zhuh, vijay, lyli, joohwee}@i2r.a-star.edu.sg, pangsaai@gmail.com |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | No explicit statement about providing open-source code or a link to a repository was found. |
| Open Datasets | Yes | The dataset is synthesized using the indoor images from the SUN-RGBD dataset [Song et al., 2015], NYU-Depth dataset [Silberman et al., 2012] and natural images from the COCO dataset [Lin et al., 2014]. |
| Dataset Splits | Yes | After generating the hazy images, we randomly choose 85% data for training, 10% data for validation, and a small number of test images to form the indoor and outdoor subsets. |
| Hardware Specification | Yes | The entire network is trained on an Nvidia Titan X GPU in PyTorch. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'ADAM' but does not give version numbers for them or list any other versioned software dependencies. |
| Experiment Setup | Yes | For training, we employ the ADAM [Kingma and Ba, 2015] optimizer with a learning rate of 0.002 and a batch size of eight. We set γ = 10^4 through cross-validation. |
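The reported training configuration (ADAM optimizer at a learning rate of 0.002, batch size of eight, implemented in PyTorch) can be sketched as below. The `DehazeNet` module and the L1 loss are placeholders for illustration only; the paper does not release code, so this is not the authors' actual architecture or objective.

```python
# Hedged sketch of the reported training setup: Adam, lr=0.002, batch of 8.
# `DehazeNet` is a stand-in single-layer module, NOT the paper's network.
import torch
import torch.nn as nn


class DehazeNet(nn.Module):
    """Placeholder dehazing network (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


model = DehazeNet()
# Optimizer settings taken from the paper: ADAM with learning rate 0.002.
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)

# A batch of eight synthetic hazy RGB images (batch size from the paper).
batch = torch.rand(8, 3, 64, 64)
output = model(batch)

# Placeholder reconstruction loss; the paper's actual objective differs.
loss = nn.functional.l1_loss(output, batch)
loss.backward()
optimizer.step()
```

Running one such step only exercises the optimizer plumbing; reproducing the paper's results would additionally require its GAN losses and the synthesized SUN-RGBD/NYU-Depth/COCO training data described above.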