Fader Networks: Manipulating Images by Sliding Attributes
Authors: Guillaume Lample, Neil Zeghidour, Nicolas Usunier, Antoine Bordes, Ludovic Denoyer, Marc'Aurelio Ranzato
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 Experiments: 5.1 Experiments on the CelebA dataset (quantitative evaluation protocol, quantitative results), 5.2 Experiments on the Flowers dataset. "We performed a quantitative evaluation of Fader Networks on Mechanical Turk, using IcGAN as a baseline." |
| Researcher Affiliation | Collaboration | (1) Facebook AI Research; (2) Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6; (3) LSCP, ENS, EHESS, CNRS, PSL Research University, INRIA |
| Pseudocode | No | The paper describes the architecture and algorithm in text and mathematical equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper refers to a GitHub link for the IcGAN baseline model (https://github.com/Guim3/IcGAN) but does not provide a link or statement about open-sourcing the Fader Networks code itself. |
| Open Datasets | Yes | "We first present experiments on the CelebA dataset [14], which contains 200,000 images of celebrities of shape 178 × 218 annotated with 40 attributes." "We performed additional experiments on the Oxford-102 dataset, which contains about 9,000 images of flowers classified into 102 categories [17]." |
| Dataset Splits | Yes | "We used the standard training, validation and test split." |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions "All models were trained with Adam [11]" but does not provide specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions) needed for replication. |
| Experiment Setup | Yes | "All models were trained with Adam [11], using a learning rate of 0.002, β1 = 0.5, and a batch size of 32. We performed data augmentation by horizontally flipping images with a probability of 0.5 at each iteration. We initially set λE to 0 and the model is trained like a normal auto-encoder. Then, λE is linearly increased to 0.0001 over the first 500,000 iterations to slowly encourage the model to produce invariant representations." |
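The λE warm-up described in the setup row can be sketched as a simple linear ramp. This is an illustrative reconstruction from the quoted hyperparameters, not the authors' released code; the function name and signature are assumptions.

```python
def lambda_e_schedule(iteration, target=0.0001, ramp_iters=500_000):
    """Illustrative sketch of the paper's lambda_E warm-up:
    start at 0 and increase linearly to `target` over the first
    `ramp_iters` training iterations, then stay constant.
    (Names and exact clamping behavior are assumptions.)"""
    return target * min(iteration / ramp_iters, 1.0)
```

For example, at iteration 0 the model trains as a plain auto-encoder (λE = 0), and after 500,000 iterations the invariance term reaches its full weight of 0.0001.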