Self-Supervised Enhancement of Latent Discovery in GANs

Authors: Adarsh Kappiyath, Silpa Vadakkeeveetil Sreelatha, S. Sumitra (pp. 7078-7086)

AAAI 2022

Reproducibility assessment (variable: result — LLM response):
Research Type: Experimental — "Qualitative and quantitative evaluation of the discovered directions demonstrates that our proposed method significantly improves disentanglement in various datasets. We also show that the learned SRE can be used to perform Attribute-based image retrieval task without further training."
Researcher Affiliation: Collaboration — (1) Flytxt Mobile Solutions, Trivandrum, India; (2) TCS Research, Pune, India; (3) Indian Institute of Space Science and Technology, Trivandrum, India.
Pseudocode: No — The paper describes the training scheme with mathematical formulas and textual descriptions but does not include any structured pseudocode or algorithm blocks.
Open Source Code: No — The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository for their method.
Open Datasets: Yes — CelebA-HQ (Karras et al. 2018) consists of 30,000 celebrity-face images at 1024×1024 resolution. The Anime Faces dataset (Jin et al. 2017) consists of 64×64 face images of anime characters. LSUN-Cars (Yu et al. 2015) consists of 512×512 images of cars. 3D Shapes (Burgess and Kim 2018) contains 480,000 synthetic images at 64×64 resolution with 6 factors of variation.
Dataset Splits: No — The paper names the datasets used and gives training batch sizes but does not provide the percentages or counts for training, validation, or test splits needed for reproduction.
Hardware Specification: No — The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies: No — The paper mentions using a ResNet-18 model and the Adam optimizer but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch, TensorFlow, or a specific Python version) that would be needed for replication.
Experiment Setup: Yes — Number of iterations: 6000 for 3D Shapes and 20000 for all other datasets. Optimization: the Adam optimizer is used to optimize both D and the SRE, with a learning rate of 0.0001. Batch size: 64 for 3D Shapes, 8 for CelebA-HQ, and 16 for all other datasets.
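The reported hyperparameters can be collected into a small lookup table. The sketch below is illustrative only — the dataset keys, dict layout, and `get_setup` helper are assumptions of this review, not the authors' (unreleased) code; the numeric values are taken from the setup quoted above, with unlisted datasets falling back to the "all other datasets" values.

```python
# Hyperparameters quoted in the paper's experiment setup. The structure and
# helper function are illustrative; only the numbers come from the paper.
LEARNING_RATE = 1e-4  # Adam, used for both D and the SRE

TRAINING_CONFIG = {
    # dataset name -> (iterations, batch size)
    "3DShapes": (6000, 64),
    "CelebA-HQ": (20000, 8),
}
DEFAULT_CONFIG = (20000, 16)  # "all other datasets" (e.g., Anime Faces, LSUN-Cars)

def get_setup(dataset: str) -> dict:
    """Return the reported iteration count, batch size, and learning rate."""
    iterations, batch_size = TRAINING_CONFIG.get(dataset, DEFAULT_CONFIG)
    return {"iterations": iterations, "batch_size": batch_size, "lr": LEARNING_RATE}
```

Pinning values in one place like this would also make the missing details (splits, hardware, library versions) easier to spot when attempting a replication.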