Discovering Interpretable Representations for Both Deep Generative and Discriminative Models

Authors: Tameem Adel, Zoubin Ghahramani, Adrian Weller

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we achieve state-of-the-art results on three datasets using the two proposed algorithms."
Researcher Affiliation | Collaboration | 1 University of Cambridge, UK; 2 Leverhulme CFI, Cambridge, UK; 3 Uber AI Labs, USA; 4 The Alan Turing Institute, UK.
Pseudocode | Yes | "The key steps of the algorithm are shown in Algorithm 1 in the Appendix."
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | "We qualitatively and quantitatively evaluate the proposed frameworks on three datasets: MNIST, SVHN and Chairs. [...] Side information used with a few of the MNIST images are the digit labels and thickness. Side information for SVHN is the lighting condition and saturation degree, and it comes in the form of azimuth and width for the 3D Chairs data."
Dataset Splits | No | The paper mentions a "training size" and a "test set" but does not describe a validation split or explain how the data was partitioned for validation.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU/GPU models or memory) used to run the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies used in the experiments.
Experiment Setup | No | The paper states "Details of the datasets and experiments are provided in Sections 10 and 12 of the Appendix, respectively," indicating that explicit setup details such as hyperparameters are not given in the main text.