Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering
Authors: Xiaopeng Li, Zhourong Chen, Leonard K. M. Poon, Nevin L. Zhang
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The rest of the paper is organized as follows. The related works are reviewed in Section 2. We introduce the proposed method and learning algorithms in Section 3. In Section 4, we present the empirical results. The conclusion is given in Section 5. |
| Researcher Affiliation | Academia | Xiaopeng Li1, Zhourong Chen1, Leonard K. M. Poon2 and Nevin L. Zhang1 1 Department of Computer Science and Engineering The Hong Kong University of Science and Technology 2 Department of Mathematics & Information Technology The Education University of Hong Kong {xlibo,zchenbb,lzhang}@cse.ust.hk, kmpoon@eduhk.hk |
| Pseudocode | Yes | Algorithm 1 Learning Latent Tree Variational Autoencoder |
| Open Source Code | No | The paper does not contain any explicit statement about open-source code release or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | The datasets include MNIST, STL-10, Reuters (Xie et al., 2016; Jiang et al., 2017) and the Heterogeneity Human Activity Recognition (HHAR) dataset. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, and test dataset splits (e.g., percentages or exact counts), nor does it reference predefined splits with citations for these specific datasets. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' but does not specify version numbers for any libraries, frameworks, or development environments used for the experiments. |
| Experiment Setup | Yes | For the proposed LTVAE, we use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001 and a mini-batch size of 128. For Stepwise EM, we set the learning rate to 0.01. As in Algorithm 1, we set E = 5, i.e., we update the latent tree model every 5 epochs. When optimizing the candidate models during structure search, we perform 10 random restarts and train with EM for 200 iterations. |
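
Since the authors do not release code, the reported setup in the last row can be summarized as a short training sketch. The following is a minimal, hypothetical PyTorch-style rendering of the quoted hyperparameters and of the alternation described in Algorithm 1 (Adam with learning rate 0.001, mini-batch size 128, a latent tree update every E = 5 epochs, Stepwise EM learning rate 0.01, 10 random restarts, 200 EM iterations). The names `model`, `train_loader`, and `update_latent_tree` are placeholders supplied here for illustration, not the authors' implementation.

```python
import torch


def train_ltvae(model, train_loader, update_latent_tree, num_epochs=100):
    """Sketch of the reported LTVAE training schedule (assumed, not official).

    model             -- hypothetical LTVAE module exposing an `elbo(x)` method
    train_loader      -- iterable of mini-batches (assumed batch size 128)
    update_latent_tree -- hypothetical routine performing structure search and
                          Stepwise EM on the latent tree model
    """
    # Adam with the reported initial learning rate of 0.001.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    E = 5  # update the latent tree model every 5 epochs, as in Algorithm 1

    for epoch in range(num_epochs):
        for x in train_loader:
            # Gradient step on the neural-network part: maximize the ELBO.
            loss = -model.elbo(x)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        if (epoch + 1) % E == 0:
            # Structure search with 10 random restarts; each candidate latent
            # tree model is trained with Stepwise EM (learning rate 0.01) for
            # 200 iterations, per the reported setup.
            update_latent_tree(model, train_loader,
                               em_lr=0.01, restarts=10, em_iters=200)
```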