Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Weakly Supervised Disentanglement by Pairwise Similarities

Authors: Junxiang Chen, Kayhan Batmanghelich

AAAI 2020, pp. 3495-3502 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that utilizing weak supervision improves the performance of the disentanglement method substantially.
Researcher Affiliation | Academia | Junxiang Chen, Kayhan Batmanghelich, Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15232, USA.
Pseudocode | No | The paper provides mathematical formulations and figures representing the model, but no explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/batmanlab/VAE_pairwise.
Open Datasets | Yes | We evaluate our methods on five datasets: MNIST (Le Cun and Cortes 2010), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), Yale Faces (Georghiades, Belhumeur, and Kriegman 2001), 3D chairs (Aubry et al. 2014), and 3D cars (Krause et al. 2013). The details of these datasets are summarized in Table 1.
Dataset Splits | Yes | To select the hyperparameters for our method, we use 5-fold cross validation on the training data. We plot the mean log-likelihood (log p_θ(X, Y | Z)) of the five validation sets in Figure 9.
Hardware Specification | Yes | We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries, frameworks, or programming languages.
Experiment Setup | Yes | To select the hyperparameters for our method, we use 5-fold cross validation on the training data. We choose the β that maximizes the log-likelihood for each dataset. In all other experiments, we choose η1 = 1e3 and η2 = 2.
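The Dataset Splits and Experiment Setup rows describe a 5-fold cross-validation search that selects the β maximizing the mean validation log-likelihood. The snippet below is a minimal sketch of that selection loop only; `train_vae` and `held_out_log_likelihood` are hypothetical placeholders, not the authors' released code (see the linked GitHub repository for the actual implementation), and the toy data and grid of β values are illustrative assumptions.

```python
# Sketch of beta selection via 5-fold cross-validation, as described in the
# Experiment Setup row. All model-specific pieces are placeholders.
import numpy as np
from sklearn.model_selection import KFold

def train_vae(x_train, y_train, beta, eta1=1e3, eta2=2):
    """Hypothetical stand-in for fitting the weakly supervised VAE."""
    return {"beta": beta, "eta1": eta1, "eta2": eta2}

def held_out_log_likelihood(model, x_val, y_val):
    """Hypothetical stand-in for evaluating log p_theta(X, Y | Z) on a fold."""
    rng = np.random.default_rng(0)
    return -model["beta"] + rng.normal()  # dummy score; real code would score the fitted VAE

X = np.random.randn(1000, 784)    # toy stand-in for images
Y = np.random.randn(1000, 1000)   # toy stand-in for pairwise similarities

betas = [0.1, 1.0, 10.0]          # illustrative grid; the paper does not list its grid here
kf = KFold(n_splits=5, shuffle=True, random_state=0)

mean_ll = {}
for beta in betas:
    fold_scores = []
    for train_idx, val_idx in kf.split(X):
        model = train_vae(X[train_idx], Y[np.ix_(train_idx, train_idx)], beta)
        fold_scores.append(held_out_log_likelihood(model, X[val_idx], Y[np.ix_(val_idx, val_idx)]))
    mean_ll[beta] = float(np.mean(fold_scores))  # mean validation log-likelihood per beta

best_beta = max(mean_ll, key=mean_ll.get)
print(f"mean validation log-likelihood per beta: {mean_ll}")
print(f"selected beta: {best_beta}")
```

The selection criterion (mean log-likelihood over the five validation folds) mirrors what the quoted passages report; everything else in the sketch is scaffolding to make the loop runnable.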