Latent Space Factorisation and Manipulation via Matrix Subspace Projection

Authors: Xiao Li, Chenghua Lin, Ruizhe Li, Chaozheng Wang, Frank Guerin

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the utility of our method for attribute manipulation in autoencoders trained across varied domains, using both human evaluation and automated methods. The quality of generation of our new model (e.g. reconstruction, conditional generation) is highly competitive to a number of strong baselines.
Researcher Affiliation | Academia | ¹Department of Computing Science, University of Aberdeen, UK; ²Department of Computer Science, University of Sheffield, UK; ³Department of Computer Science, University of Surrey, UK. Correspondence to: Chenghua Lin <c.lin@sheffield.ac.uk>.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code for our model is available online: https://xiao.ac/proj/msp
Open Datasets | Yes | We evaluated on the CelebA dataset (Liu et al., 2015) (202,600 images) and trained one model on all 40 labelled attributes. Also: In this task, we adopt the E2E corpus (Dušek et al., 2019), which contains 50k+ reviews of restaurants... (A data-loading sketch for CelebA follows the table.)
Dataset Splits | No | The paper states that the CelebA and E2E corpus datasets were used, but does not explicitly provide training, validation, and test splits with specific percentages or counts.
Hardware Specification | Yes | The model is trained for 1000 epochs (on a Tesla T4 around 12 hours).
Software Dependencies | No | The paper mentions optimizers and other models but does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | We used the ADAM optimiser with learning rate = 0.0002, mini-batch size of 256, and images are upsampled to 256 × 256. Also: The model is trained for 1000 epochs (on a Tesla T4 around 12 hours). (A training-configuration sketch follows the table.)
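For the Open Datasets entry above, here is a minimal data-loading sketch for CelebA with all 40 binary attribute labels. This assumes a PyTorch/torchvision stack, which the report does not confirm as the paper's actual framework; the 256 × 256 resize follows the quoted experiment setup.

```python
import torch
from torchvision import datasets, transforms

# Resize to 256 x 256 to match the reported setup.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

# target_type="attr" yields the 40-dim 0/1 attribute vector the paper
# trains on. download=True fetches the dataset on first use (the
# torchvision mirror occasionally hits download quotas).
celeba = datasets.CelebA(
    root="data",
    split="train",
    target_type="attr",
    transform=transform,
    download=True,
)

loader = torch.utils.data.DataLoader(celeba, batch_size=256, shuffle=True)

image, attrs = celeba[0]
print(image.shape, attrs.shape)  # torch.Size([3, 256, 256]) torch.Size([40])
```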
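The quoted experiment setup maps directly onto an optimizer configuration. The sketch below wires the reported hyperparameters (Adam, lr = 0.0002, mini-batch 256, 256 × 256 inputs, 1000 epochs) into a single PyTorch training step. The autoencoder here is a hypothetical placeholder; the paper's actual MSP architecture is not described in this report.

```python
import torch
from torch import nn, optim

# Hyperparameters as quoted in the report.
LEARNING_RATE = 2e-4   # "learning rate = 0.0002"
BATCH_SIZE = 256
IMAGE_SIZE = 256       # images upsampled to 256 x 256
EPOCHS = 1000          # ~12 hours on a Tesla T4, per the paper

class PlaceholderAutoencoder(nn.Module):
    """Illustrative stand-in; not the paper's MSP model."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 256 -> 128
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 128 -> 64
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 3 * IMAGE_SIZE * IMAGE_SIZE),
            nn.Sigmoid(),
            nn.Unflatten(1, (3, IMAGE_SIZE, IMAGE_SIZE)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = PlaceholderAutoencoder()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)

# One reconstruction step on a dummy batch, just to show the wiring.
x = torch.rand(4, 3, IMAGE_SIZE, IMAGE_SIZE)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()
```

Only the four constants at the top are grounded in the report; the loss, architecture, and batch shown here are assumptions made to produce a runnable example.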