Analogical Image Translation for Fog Generation
Authors: Rui Gong, Dengxin Dai, Yuhua Chen, Wen Li, Danda Pani Paudel, Luc Van Gool (pp. 1433-1441)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments demonstrate the superiority of Analogical GAN over the standard zero-shot image translation methods, when tested for fog generation. The quality of our foggy real images is also validated by the state-of-the-art performance on downstream semantic foggy scene understanding. |
| Researcher Affiliation | Academia | 1Computer Vision Lab, ETH Zurich 2VISICS, KU Leuven 3University of Electronic Science and Technology of China |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. Figure 2 shows a system overview, but it is a diagram, not pseudocode. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing its source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct the analogical image translation experiments by regarding Virtual KITTI (Gaidon et al. 2016) as synthetic domain, while Cityscapes (Cordts et al. 2016) as real domain. |
| Dataset Splits | No | The paper mentions training and testing but does not explicitly provide specific details on how the dataset (Virtual KITTI or Cityscapes) was split into training, validation, and test sets, or specific percentages/counts for each split. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU models, CPU types, or memory used for conducting the experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2017)' and 'The Adam optimizer (Kingma and Ba 2015)' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The Adam optimizer (Kingma and Ba 2015) is adopted, the learning rate is fixed to 0.0002, and the batch size is set as 1. The image is resized to 512 × 256. The weight of the gist adversarial loss is set as 3, the weight of the cycle-consistency adversarial loss is set as 1, and the weights of the remaining loss terms are set as 10. |
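Since the paper releases no code, the reported training hyperparameters can be gathered into a configuration sketch for anyone attempting a reimplementation. All field names below are our own assumptions; only the values come from the paper's experiment-setup description.

```python
from dataclasses import dataclass, field
from typing import Tuple

# Hypothetical configuration object; the paper provides no official code,
# so these names are illustrative. Values follow the reported setup.
@dataclass
class AnalogicalGANConfig:
    optimizer: str = "adam"                      # Adam (Kingma and Ba 2015)
    learning_rate: float = 0.0002                # fixed throughout training
    batch_size: int = 1
    image_size: Tuple[int, int] = (512, 256)     # images resized to 512 x 256
    w_gist_adv: float = 3.0                      # gist adversarial loss weight
    w_cycle_adv: float = 1.0                     # cycle-consistency adversarial loss weight
    w_other: float = 10.0                        # weight for the remaining loss terms

cfg = AnalogicalGANConfig()
print(cfg.learning_rate, cfg.image_size)
```

Such a single dataclass makes the settings easy to log and compare against the paper when checking reproducibility.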