Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
Authors: Akira Nakagawa, Keizo Kato, Taiji Suzuki
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section describes three experimental results. First, the results on the toy dataset are examined to validate our theory. Next, the disentanglement analysis for the CelebA dataset is presented. Finally, an anomaly detection task is evaluated to show the usefulness of data distribution estimation. |
| Researcher Affiliation | Collaboration | 1Fujitsu Limited, Kanagawa, Japan 2Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan 3Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan. |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present any structured code-like blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its source code, nor does it include a link to a code repository for the described methodology. |
| Open Datasets | Yes | We use four public datasets: KDDCUP99, Thyroid, Arrhythmia, and KDDCUP-Rev. The details of the datasets and network configurations are given in Appendix H. For a fair comparison with previous works, we follow the setting in Zong et al. (2018). A randomly extracted 50% of the data is assigned to training and the rest to testing; the model is then trained using normal data only. The datasets can be downloaded at https://kdd.ics.uci.edu/ and http://odds.cs.stonybrook.edu. |
| Dataset Splits | Yes | A randomly extracted 50% of the data is assigned to training and the rest to testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or cloud computing specifications) used to run the experiments. |
| Software Dependencies | No | The paper mentions TensorFlow as the implementation framework but does not provide specific version numbers for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | We train 100 epochs using the Adam optimizer with a learning rate of 0.001. (A sketch of this setup, together with the split protocol, appears after the table.) |
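For readers who want the reported protocol in code form, the following is a minimal sketch, assuming TensorFlow/Keras (the paper names TensorFlow but no version). It covers only what the quotes above state: a random 50/50 train/test split, training on normal data only, and Adam with a learning rate of 0.001 for 100 epochs. The `build_vae` function and the `load_dataset` step are hypothetical placeholders; the paper's actual VAE configuration is given in its Appendix H and is not reproduced here.

```python
# Minimal sketch of the reported protocol: random 50/50 split, training on
# normal data only, Adam (lr = 0.001), 100 epochs. The model body below is a
# placeholder, NOT the paper's VAE (see the paper's Appendix H for that).
import numpy as np
import tensorflow as tf

def split_half(x: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Randomly assign 50% of the data to training and the rest to testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    half = len(x) // 2
    return (x[idx[:half]], labels[idx[:half]]), (x[idx[half:]], labels[idx[half:]])

def build_vae(input_dim: int) -> tf.keras.Model:
    # Hypothetical stand-in for the paper's VAE: a trivial autoencoder-style
    # model with an MSE loss, used only to make the training loop concrete.
    inputs = tf.keras.Input(shape=(input_dim,))
    hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
    outputs = tf.keras.layers.Dense(input_dim)(hidden)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mse")  # the paper optimizes a VAE objective instead
    return model

# x, y = load_dataset(...)  # e.g. KDDCUP99 features and labels (hypothetical)
x = np.random.randn(1000, 8).astype("float32")  # dummy data for illustration
y = np.random.randint(0, 2, size=1000)          # 1 = anomaly (assumed coding)

(train_x, train_y), (test_x, test_y) = split_half(x, y)
normal_train_x = train_x[train_y == 0]          # train on normal data only
model = build_vae(input_dim=x.shape[1])
model.fit(normal_train_x, normal_train_x, epochs=100, batch_size=128)
```

At test time the paper scores anomalies via the estimated data distribution; that step is omitted above, since this section quotes only the split and optimizer settings.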