Towards Perceptual Image Dehazing by Physics-Based Disentanglement and Adversarial Training
Authors: Xitong Yang, Zheng Xu, Jiebo Luo
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic datasets demonstrate our superior performance compared with the state-of-the-art methods in terms of PSNR, SSIM and CIEDE2000. |
| Researcher Affiliation | Academia | Xitong Yang, Zheng Xu (Department of Computer Science, University of Maryland, College Park, MD 20740; {xyang35, xuzh}@cs.umd.edu); Jiebo Luo (Department of Computer Science, University of Rochester, Rochester, NY 14627; jluo@cs.rochester.edu) |
| Pseudocode | No | The paper describes the network architecture and components using diagrams and text, but does not provide any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | We use D-HAZY dataset (Ancuti, Ancuti, and De Vleeschouwer 2016), a public dataset built on the Middlebury (Scharstein et al. 2014) and NYU-Depth (Silberman et al. 2012) datasets. |
| Dataset Splits | Yes | We first randomly split the dataset into two halves (split 1 and 2). We use different combinations of the images for model training, and then test the model on the hazy images in split 2. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions adapting architectures from (Zhu et al. 2017) and using standard data augmentation, but it does not specify any software dependencies with version numbers (e.g., specific deep learning frameworks, Python versions). |
| Experiment Setup | No | The paper mentions that "More details on network architectures and training procedures are presented in the appendix" and that they performed "standard data augmentation techniques including rescaling, random cropping and normalization (Zhu et al. 2017)". However, concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific training configurations are not provided in the main text. |
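The D-HAZY dataset referenced in the Open Datasets row is generated from RGB-D image pairs with the standard atmospheric scattering model, which is also the physical model the paper disentangles. The sketch below shows that synthesis step in plain NumPy; the function name `synthesize_haze` and the default values of `beta` and `airlight` are illustrative assumptions, not the D-HAZY generation settings.

```python
import numpy as np

def synthesize_haze(clear_rgb, depth, beta=1.0, airlight=1.0):
    """Render a hazy image from a clear image and its depth map.

    Uses the atmospheric scattering model I = J * t + A * (1 - t) with
    transmission t = exp(-beta * d). The values of `beta` and `airlight`
    are illustrative defaults, not the D-HAZY generation parameters.
    """
    clear = clear_rgb.astype(np.float64) / 255.0    # J in [0, 1]
    d = depth.astype(np.float64) / depth.max()      # normalised depth
    t = np.exp(-beta * d)[..., None]                # transmission map, H x W x 1
    hazy = clear * t + airlight * (1.0 - t)         # scattering model
    return (np.clip(hazy, 0.0, 1.0) * 255.0).astype(np.uint8)
```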
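The Dataset Splits row describes randomly splitting the data into two halves and training on different combinations of images. A minimal sketch of such a split, assuming a list of paired file paths; the helper name `split_in_half` and the fixed seed are assumptions made for this illustration only.

```python
import random

def split_in_half(pairs, seed=0):
    """Randomly split a list of (hazy, clear) path pairs into two halves.

    Hypothetical helper mirroring the described protocol: the model is
    trained on combinations of images from the two halves and tested on
    the hazy images of split 2.
    """
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    mid = len(pairs) // 2
    return pairs[:mid], pairs[mid:]   # split 1, split 2
```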
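The Experiment Setup row notes rescaling, random cropping, and normalization adapted from Zhu et al. 2017, with concrete values deferred to the paper's appendix. Below is a minimal torchvision sketch; the 286/256 sizes and the 0.5 normalization statistics follow the common CycleGAN convention and are assumptions here, not values confirmed by the paper.

```python
from torchvision import transforms

# Illustrative augmentation pipeline; the resize/crop sizes and the
# normalization statistics are assumed (CycleGAN-style), not taken
# from the paper, which defers such details to its appendix.
train_transform = transforms.Compose([
    transforms.Resize(286),                      # rescale the shorter side
    transforms.RandomCrop(256),                  # random 256 x 256 patch
    transforms.ToTensor(),                       # PIL image -> float tensor in [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5),   # map to roughly [-1, 1]
                         std=(0.5, 0.5, 0.5)),
])
```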
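Finally, the comparison quoted in the Research Type row is reported in PSNR, SSIM, and CIEDE2000, and no evaluation code is released. The sketch below shows how these metrics are commonly computed with scikit-image (0.19 or later for the `channel_axis` argument); the helper `evaluate_pair` and the uint8 RGB input convention are assumptions rather than the authors' evaluation script.

```python
import numpy as np
from skimage import color
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed, ground_truth):
    """Compute PSNR, SSIM, and mean CIEDE2000 for one uint8 RGB image pair.

    `evaluate_pair` is a hypothetical helper; the paper does not release
    its evaluation code, so treat this as an illustrative sketch only.
    """
    psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=255)
    # channel_axis=-1 treats the last dimension as the colour channels
    ssim = structural_similarity(ground_truth, dehazed,
                                 channel_axis=-1, data_range=255)
    # CIEDE2000 is defined in CIELAB space; average the per-pixel differences
    lab_gt = color.rgb2lab(ground_truth.astype(np.float64) / 255.0)
    lab_dh = color.rgb2lab(dehazed.astype(np.float64) / 255.0)
    ciede2000 = color.deltaE_ciede2000(lab_gt, lab_dh).mean()
    return psnr, ssim, ciede2000
```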