Towards Building A Group-based Unsupervised Representation Disentanglement Framework
Authors: Tao Yang, Xuanchi Ren, Yuwang Wang, Wenjun Zeng, Nanning Zheng
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we train 1800 models covering the most prominent VAE-based methods on five datasets to verify the effectiveness of our theoretical framework. Compared to the original VAE-based methods, these Groupified VAEs consistently achieve better mean performance with smaller variances. |
| Researcher Affiliation | Collaboration | Tao Yang (1), Xuanchi Ren (2), Yuwang Wang (3), Wenjun Zeng (4), Nanning Zheng (1). Affiliations: (1) Xi'an Jiaotong University, (2) HKUST, (3) Microsoft Research Asia, (4) EIT |
| Pseudocode | No | No pseudocode or algorithm block was explicitly labeled or formatted as such. |
| Open Source Code | No | The paper references official implementations of *other* methods (e.g., ControlVAE and RGrVAE) with GitHub links, but does not state that its *own* proposed method's source code is publicly available. |
| Open Datasets | Yes | To evaluate our method, we consider several datasets: dSprites (Higgins et al., 2017), Shapes3D (Kim & Mnih, 2018), Cars3D (Reed et al., 2015), and the variants of dSprites introduced by Locatello et al. (2019b): Color-dSprites and Noisy-dSprites. |
| Dataset Splits | No | The paper states that 'In all the experiments, we resize the images to 64x64 resolution' and lists datasets, but does not provide specific train/validation/test split percentages or counts. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud instance specifications used for running experiments. |
| Software Dependencies | No | The paper mentions 'implemented by Pytorch Paszke et al. (2017)' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We run using different hyperparameters and random seeds for every VAE-based model, implemented in PyTorch (Paszke et al., 2017). As shown in Table 4, for β-VAE we assign 3 choices for β and 10 random seeds for both the Original and Groupified VAEs: 3x10x2 = 60 settings per dataset. We likewise assign 60 settings each for FactorVAE and β-TCVAE. For AnnealVAE, we assign 3 choices for C, 3 choices for the (start, end) pair, and 10 random seeds. In summary, across all 5 datasets we run ((3x10x2)x3 + 3x3x10x2)x5 = 1800 models. For other hyperparameters, please refer to Table 5(b). (A sketch enumerating this sweep follows the table.) |
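
As a sanity check on the quoted run count, the following minimal Python sketch enumerates the sweep exactly as the setup describes it. The concrete value sets are placeholders (indices standing in for the actual β, C, and (start, end) choices listed in the paper's Tables 4 and 5), so this is an illustrative reconstruction of the grid's shape, not the paper's actual hyperparameter values.

```python
# Minimal sketch of the hyperparameter sweep described in the Experiment Setup
# row. All value sets below are assumptions: the paper specifies only the
# counts (3 hyperparameter choices, 10 seeds, 2 variants, 5 datasets), not the
# values reproduced here.
from itertools import product

DATASETS = ["dSprites", "Color-dSprites", "Noisy-dSprites", "Shapes3D", "Cars3D"]
SEEDS = range(10)                      # 10 random seeds
VARIANTS = ["original", "groupified"]  # Original vs. Groupified VAE

runs = []

# beta-VAE, FactorVAE, beta-TCVAE: 3 hyperparameter choices each
# (placeholder indices stand in for the actual values in Table 4).
for method in ["beta-VAE", "FactorVAE", "beta-TCVAE"]:
    for dataset, hp, seed, variant in product(DATASETS, range(3), SEEDS, VARIANTS):
        runs.append((method, dataset, hp, seed, variant))

# AnnealVAE: 3 choices for C crossed with 3 (start, end) pairs.
for dataset, c, pair, seed, variant in product(DATASETS, range(3), range(3), SEEDS, VARIANTS):
    runs.append(("AnnealVAE", dataset, (c, pair), seed, variant))

# ((3x10x2)x3 + 3x3x10x2) x 5 datasets = 1800 models, matching the paper.
assert len(runs) == 1800
```

Substituting the real β, C, and (start, end) values from Table 4 for the placeholder indices would reproduce the exact 1800-run grid the paper reports.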