Learning Disentangled Representation with Pairwise Independence

Authors: Zejian Li, Yongchuan Tang, Wei Li, Yongxing He

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our proposed method gives competitive performances as compared with other state-of-the-art methods.
Researcher Affiliation | Academia | Zejian Li (1), Yongchuan Tang (1,2), Wei Li (1), Yongxing He (1); (1) College of Computer Science, Zhejiang University, Hangzhou 310027, China; (2) Zhejiang Lab, Hangzhou 310027, China; {zejianlee, yctang, liwei 2014, heyongxing}@zju.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our source code is available on https://github.com/ZejianLi/Pairwise-IndepenceAutoencoder.
Open Datasets | Yes | The experiments are conducted on several image datasets, including MNIST (LeCun et al. 1998), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), CelebA (Liu et al. 2015), Flower (Nilsback and Zisserman 2008), CUB (Wah et al. 2011), Chairs (Aubry et al. 2014) and CIFAR10 (Krizhevsky, Nair, and Hinton 2009).
Dataset Splits | Yes | We conduct the CCA analysis within a tenfold cross validation and display the average performances.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory amounts) used for running the experiments.
Software Dependencies | No | The paper states "the proposed algorithms are implemented with PyTorch (Paszke et al. 2017)" but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | The latent dimension of z is set as 16 in MNIST and Fashion MNIST, and 64 in other datasets. The network architecture is designed according to DCGAN (Radford, Metz, and Chintala 2015). We use Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.0001 and a momentum of 0.5. The batch size is 64.
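
As a concrete starting point for reproduction, the following is a minimal sketch of the training configuration quoted in the Experiment Setup row, assuming a standard PyTorch autoencoder loop. Only the latent dimension, the Adam learning rate, the beta1/"momentum" value, and the batch size come from the paper; the DCGAN-style encoder/decoder layers, the MNIST data loading, and the plain reconstruction loss are placeholder assumptions and do not implement the paper's pairwise-independence objective.

```python
# Minimal sketch of the reported training configuration (PyTorch).
# Only latent_dim, the Adam settings, and the batch size come from the paper;
# the encoder/decoder, the MNIST loader, and the reconstruction-only loss
# are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

latent_dim = 16   # 16 for MNIST / Fashion MNIST, 64 for the other datasets
batch_size = 64

# Placeholder DCGAN-like encoder/decoder for 28x28 grayscale inputs.
encoder = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(128 * 7 * 7, latent_dim),
)
decoder = nn.Sequential(
    nn.Linear(latent_dim, 128 * 7 * 7), nn.ReLU(),
    nn.Unflatten(1, (128, 7, 7)),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)

# Adam with lr 0.0001; the quoted "momentum of 0.5" is read here as beta1 = 0.5.
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.5, 0.999))

loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=True,
)

for x, _ in loader:
    z = encoder(x)               # encode a batch into latent_dim-dimensional codes
    x_hat = decoder(z)           # reconstruct the batch from the codes
    loss = F.mse_loss(x_hat, x)  # reconstruction term only; the paper's
    optimizer.zero_grad()        # pairwise-independence penalty is omitted
    loss.backward()
    optimizer.step()
    break  # a single step, shown only to illustrate the configuration
```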