Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Unsupervised Attention-guided Image-to-Image Translation
Authors: Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, Kwang In Kim
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches. |
| Researcher Affiliation | Academia | Youssef A. Mejjati (University of Bath), Christian Richardt (University of Bath), James Tompkin (Brown University), Darren Cosker (University of Bath), Kwang In Kim (University of Bath) |
| Pseudocode | Yes | Algorithm 1 summarizes the training procedure for learning F_S→T; training F_T→S is similar. |
| Open Source Code | Yes | Our code is released in the following Github repository: https://github.com/AlamiMejjati/Unsupervised-Attention-guided-Image-to-Image-Translation. |
| Open Datasets | Yes | We use the Apple to Orange (A↔O) and Horse to Zebra (H↔Z) datasets provided by Zhu et al. [1], and the Lion to Tiger (L↔T) dataset obtained from the corresponding classes in the Animals With Attributes (AWA) dataset [28]. |
| Dataset Splits | No | The paper mentions training and testing but does not provide specific details on training/test/validation dataset splits (e.g., percentages or counts) or their methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch, TensorFlow, or Python versions). |
| Experiment Setup | Yes | where we use the loss hyper-parameter λcyc = 10 throughout our experiments. [...] Algorithm 1 summarizes the training procedure... K (number of epochs), λcyc (cycle-consistency weight), α (ADAM learning rate). [...] we first train the discriminators on full images for 30 epochs [...] which we set to 0.1. |
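The experiment-setup quotes above describe the paper's core mechanism: an attention map composites the generator's output onto the source image, with the quoted hyper-parameters (λcyc = 10, 30 epochs of discriminator pre-training on full images, an attention threshold of 0.1). A minimal sketch of that compositing step is given below; the function names and config structure are illustrative assumptions, not taken from the released code.

```python
import numpy as np

# Hyper-parameters quoted in the paper's experiment setup;
# the dict keys themselves are illustrative, not from the repository.
CONFIG = {
    "lambda_cyc": 10.0,          # cycle-consistency weight (λcyc)
    "disc_pretrain_epochs": 30,  # discriminators first trained on full images
    "attention_threshold": 0.1,  # threshold applied to the attention map
}

def threshold_attention(attention, tau=CONFIG["attention_threshold"]):
    """Binarize the attention map at threshold tau (0.1 in the paper)."""
    return (attention > tau).astype(np.float32)

def attention_guided_translation(source, generated, attention):
    """Composite generated foreground onto the source background.

    Keeps the source where attention is low and uses the generator
    output where attention is high:
        output = attention * generated + (1 - attention) * source
    """
    return attention * generated + (1.0 - attention) * source
```

For example, with a mask of `[0, 1]`, the output takes the first pixel from the source and the second from the generator, which is how the method restricts translation to attended regions without supervision.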