Adversarial Disentanglement with Grouped Observations

Authors: Jozsef Nemeth (pp. 10243-10250)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results and comparisons on image datasets show that the resulting method can efficiently separate the content and style related attributes and generalizes to unseen data.
Researcher Affiliation | Industry | Jozsef Nemeth, Ultinous, Hungary (jnemeth@ultinous.com)
Pseudocode | Yes | The pseudo-code in Algorithm 2 shows the main steps of the proposed algorithm.
Open Source Code | Yes | The source code of the experiments and supplementary material are available online at https://github.com/jonemeth/aaai20.
Open Datasets | Yes | The MNIST (LeCun et al. 1998) dataset is composed of only 10 classes of handwritten digits, while there is a wide range of variability in style. The Chairs (Yang et al. 2015) dataset contains rendered images of about one thousand different three-dimensional chair models, but the intra-class variability is very low as the images within a given class differ only by the view of the model. Finally, the VGGFace2 (Cao et al. 2018) face recognition dataset represents high variability in both content and style.
Dataset Splits | Yes | For that purpose, each of the datasets was split into three parts. For a given dataset, the first set was used to form the groups for training the models, the classifiers were trained on the second, and the classification performances were evaluated on the last part. For MNIST, the set of 50000 training images was split into two parts: the models were trained on 45000 samples, and for a given group size 10000 groups were randomly formed from each of the 10 classes. The remaining 5000 images were used to train the classifiers, while the MNIST test set was used for evaluation. (A minimal grouping sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as CPU or GPU models or the memory used for the experiments. It only mentions 'neural networks' and the 'Adam optimizer'.
Software Dependencies | No | The paper mentions the 'Adam optimizer (Kingma and Ba 2015)' but does not provide specific version numbers for any software dependencies or libraries such as Python, PyTorch/TensorFlow, or CUDA.
Experiment Setup | No | The description of the neural networks and the optimization parameters used for the different experiments can be found in the supplementary material.
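
The Dataset Splits row describes a concrete data-preparation protocol for MNIST: 45000 images for training the models, 5000 held out for training the classifiers, and 10000 randomly sampled same-class groups per digit for a given group size. The sketch below illustrates that grouping step in NumPy; the function name, arguments, and sampling details are illustrative assumptions, not the authors' released code from the repository above.

```python
import numpy as np

def make_mnist_groups(images, labels, group_size, n_groups_per_class=10000,
                      n_model=45000, seed=0):
    """Illustrative version of the split described above (an assumption,
    not the paper's code): reserve n_model images for model training,
    the rest for the classifiers, and sample same-class groups."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(images))
    model_idx, clf_idx = perm[:n_model], perm[n_model:]

    groups = []
    for digit in range(10):                      # 10 MNIST classes
        class_idx = model_idx[labels[model_idx] == digit]
        for _ in range(n_groups_per_class):
            # every image in a group shares the same class label (the "content")
            pick = rng.choice(class_idx, size=group_size, replace=False)
            groups.append(images[pick])

    return np.stack(groups), images[clf_idx], labels[clf_idx]
```

Per the row above, the held-out 5000 images would then be used to train the evaluation classifiers, with classification performance measured on the MNIST test set.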