Associative Variational Auto-Encoder with Distributed Latent Spaces and Associators
Authors: Dae Ung Jo, ByeongJu Lee, Jongwon Choi, Haanju Yoo, Jin Young Choi (pp. 11197-11204)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments, the effectiveness and performance of the proposed structure are validated via comparison with existing methods and self-analysis on various datasets, including visual and auditory (voice) data. |
| Researcher Affiliation | Collaboration | Dae Ung Jo (1), Byeong Ju Lee (1), Jongwon Choi (2), Haanju Yoo (3), Jin Young Choi (1); {mardaewoon, adolys, jychoi}@snu.ac.kr, {jw17.choi, haanju.yoo}@samsung.com; (1) Department of ECE, ASRI, Seoul National University, Korea; (2) Samsung SDS, Korea; (3) Samsung Research, Korea |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | Google Speech Commands (GSC) (Warden 2018), German Traffic Sign Recognition Benchmark (GTSRB) (Stallkamp et al. 2011), MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011), Fashion-MNIST (F-MNIST) (Xiao, Rasul, and Vollgraf 2017), Rendered Hand pose Dataset (RHD) (Zimmermann and Brox 2017). |
| Dataset Splits | Yes | GSC: 'We randomly divided the original dataset into training, validation and test sets at the ratio of 8:1:1.' MNIST: 60k training and 10k testing samples. SVHN: 73,257 training and 26,032 testing samples. RHD: 41,258 training and 2,728 testing samples. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper states 'The implementation details for network architectures are provided in Appendix C of the supplementary document.'; however, specific numerical values for hyperparameters or system-level training settings are not provided in the main text. |