Unsupervised Attention-guided Image-to-Image Translation
Authors: Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, Kwang In Kim
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches. |
| Researcher Affiliation | Academia | Youssef A. Mejjati (University of Bath, yam28@bath.ac.uk); Christian Richardt (University of Bath, christian@richardt.name); James Tompkin (Brown University, james_tompkin@brown.edu); Darren Cosker (University of Bath, D.P.Cosker@bath.ac.uk); Kwang In Kim (University of Bath, k.kim@bath.ac.uk) |
| Pseudocode | Yes | Algorithm 1 summarizes the training procedure for learning F_S→T; training F_T→S is similar. |
| Open Source Code | Yes | Our code is released in the following GitHub repository: https://github.com/AlamiMejjati/Unsupervised-Attention-guided-Image-to-Image-Translation |
| Open Datasets | Yes | We use the Apple to Orange (A→O) and Horse to Zebra (H→Z) datasets provided by Zhu et al. [1], and the Lion to Tiger (L→T) dataset obtained from the corresponding classes in the Animals With Attributes (AWA) dataset [28]. |
| Dataset Splits | No | The paper mentions training and testing but does not provide specific details on training/test/validation dataset splits (e.g., percentages or counts) or their methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch, TensorFlow, or Python versions). |
| Experiment Setup | Yes | where we use the loss hyper-parameter λcyc = 10 throughout our experiments. … Algorithm 1 summarizes the training procedure … K (number of epochs), λcyc (cycle-consistency weight), α (ADAM learning rate). … we first train the discriminators on full images for 30 epochs. … which we set to 0.1. |
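The experiment-setup quotes above can be collected into a minimal sketch. This is an illustrative reconstruction only: the helper names (`cycle_consistency_loss`, `apply_attention`) and the tensor shapes are assumptions, and the paper's actual network architectures are not reproduced here. Only the numeric values (λcyc = 10, 30 discriminator epochs on full images, attention threshold 0.1) come from the table.

```python
import torch

# Hyper-parameters quoted in the table (paper: Alg. 1 and training details)
LAMBDA_CYC = 10.0            # cycle-consistency weight λ_cyc
DISC_FULL_IMAGE_EPOCHS = 30  # discriminators trained on full images first
ATTN_THRESHOLD = 0.1         # attention-map threshold applied afterwards


def cycle_consistency_loss(x: torch.Tensor, x_rec: torch.Tensor,
                           lam: float = LAMBDA_CYC) -> torch.Tensor:
    """L1 cycle-consistency loss between an input and its reconstruction,
    weighted by λ_cyc (hypothetical helper; loss form assumed, weight from paper)."""
    return lam * torch.mean(torch.abs(x - x_rec))


def apply_attention(image: torch.Tensor, attention_map: torch.Tensor,
                    epoch: int) -> torch.Tensor:
    """Mask an image by its attention map. After the first 30 epochs the map
    is binarized at the 0.1 threshold, per the quoted setup."""
    if epoch > DISC_FULL_IMAGE_EPOCHS:
        attention_map = (attention_map > ATTN_THRESHOLD).float()
    return image * attention_map
```

For example, `apply_attention(img, att, epoch=31)` zeroes out every region whose attention value is at or below 0.1, while at earlier epochs the soft attention map is used directly.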