Multilinear Latent Conditioning for Generating Unseen Attribute Combinations

Authors: Markos Georgopoulos, Grigorios Chrysos, Maja Pantic, Yannis Panagakis

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement two variants of our model and demonstrate their efficacy on MNIST, Fashion-MNIST and CelebA. Altogether, we design a novel conditioning framework that can be used with any architecture to synthesize unseen attribute combinations.
Researcher Affiliation | Collaboration | (1) Department of Computing, Imperial College London, United Kingdom; (2) Department of Informatics and Telecommunications, University of Athens, Greece.
Pseudocode | No | The paper describes its methods using mathematical equations and prose; no structured pseudocode or algorithm blocks are present.
Open Source Code | No | All models were implemented in PyTorch (Paszke et al., 2017) and TensorLy (Kossaifi et al., 2019). The paper contains no explicit statement about releasing the authors' own code and no link to a repository.
Open Datasets | Yes | To evaluate our model on multi-attribute conditional image generation, we perform experiments on the MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017) and CelebA (Liu et al., 2015) datasets.
Dataset Splits | Yes | The MNIST dataset consists of 60k training images and 10k test images of handwritten digits. Fashion-MNIST consists of 60k training and 10k test images. (A loading sketch for these splits follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | Yes | All models were implemented in PyTorch (Paszke et al., 2017) and TensorLy (Kossaifi et al., 2019).
Experiment Setup | Yes | For the experiments on MNIST and Fashion-MNIST the encoder and decoder networks have 4 layers, while the networks for CelebA have 5 layers. All label decoders are affine transformations. We set β = 1 for all experiments, except for CelebA where we set β = 10. For fair comparison we train all models for 50 epochs using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0005. (A sketch of this training configuration follows the table.)
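
The dataset and split details reported above (MNIST and Fashion-MNIST with their standard 60k/10k train/test splits, and CelebA with its attribute annotations) can be obtained through standard loaders. The sketch below uses torchvision for illustration only; the paper does not state how the data were loaded or preprocessed, so the torchvision API, the CelebA center-crop/resize sizes, and the batch size are assumptions.

```python
# Minimal sketch of fetching the datasets and splits referenced in the table.
# The loaders and preprocessing here are assumptions, not the paper's pipeline.
import torch
from torchvision import datasets, transforms

mnist_tf = transforms.ToTensor()  # 28x28 grayscale images scaled to [0, 1]

# MNIST: 60k training / 10k test images of handwritten digits.
mnist_train = datasets.MNIST("data/", train=True, download=True, transform=mnist_tf)
mnist_test = datasets.MNIST("data/", train=False, download=True, transform=mnist_tf)

# Fashion-MNIST: 60k training / 10k test images of clothing items.
fmnist_train = datasets.FashionMNIST("data/", train=True, download=True, transform=mnist_tf)
fmnist_test = datasets.FashionMNIST("data/", train=False, download=True, transform=mnist_tf)

# CelebA ships with official train/valid/test splits and 40 binary attributes;
# the 148-pixel crop and 64x64 resize below are illustrative choices.
celeba_tf = transforms.Compose(
    [transforms.CenterCrop(148), transforms.Resize(64), transforms.ToTensor()]
)
celeba_train = datasets.CelebA("data/", split="train", target_type="attr",
                               download=True, transform=celeba_tf)

train_loader = torch.utils.data.DataLoader(mnist_train, batch_size=128, shuffle=True)
```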
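The experiment-setup row can likewise be summarised as a small training script. The sketch below is only a β-weighted VAE skeleton that shows where the reported hyperparameters enter (a 4-layer encoder/decoder for 28x28 inputs, Adam with learning rate 0.0005, 50 epochs, β = 1 for MNIST/Fashion-MNIST and β = 10 for CelebA); the paper's multilinear latent conditioning and affine label decoders are not reproduced here, and the channel widths and latent size are illustrative assumptions.

```python
# Hedged sketch of the reported training configuration. The architecture widths
# and latent size are assumptions; only the optimizer, learning rate, epoch
# count, and beta values come from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # 4-layer convolutional encoder for 1x28x28 inputs (widths are assumptions).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),    # 28 -> 14
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 14 -> 7
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),  # 7 -> 4
            nn.Conv2d(128, 2 * latent_dim, 4),       # 4 -> 1, outputs mu and logvar
        )
        # 4-layer transposed-convolutional decoder mirroring the encoder.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4), nn.ReLU(),  # 1 -> 4
            nn.ConvTranspose2d(128, 64, 3, 2, 1), nn.ReLU(),    # 4 -> 7
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),     # 7 -> 14
            nn.ConvTranspose2d(32, 1, 4, 2, 1),                 # 14 -> 28 (logits)
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).flatten(1).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return self.decoder(z[:, :, None, None]), mu, logvar

def loss_fn(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term plus beta-weighted KL divergence (beta = 10 for CelebA).
    recon_loss = F.binary_cross_entropy_with_logits(recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon_loss + beta * kl

model = SmallVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # reported learning rate

# for epoch in range(50):                # reported number of epochs
#     for x, _ in train_loader:          # e.g. the MNIST loader from the sketch above
#         recon, mu, logvar = model(x)
#         loss = loss_fn(recon, x, mu, logvar, beta=1.0)
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
```

For CelebA, the paper reports a 5-layer encoder/decoder and β = 10; the same skeleton would apply with one additional layer pair and the larger β.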