Improving representation learning in autoencoders via multidimensional interpolation and dual regularizations
Authors: Sheng Qian, Guanyue Li, Wen-Ming Cao, Cheng Liu, Si Wu, Hau San Wong
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared with representative models, our proposed approach empirically achieves better representation-learning performance on downstream tasks across multiple benchmarks. In our experiments, we evaluate our proposed models on the following datasets: MNIST [LeCun et al., 1998], SVHN [Netzer et al., 2011], CIFAR-10 [Krizhevsky and Hinton, 2009] and CelebA [Liu et al., 2015]. |
| Researcher Affiliation | Collaboration | 1) Huawei Device Company Limited; 2) School of Computer Science and Engineering, South China University of Technology; 3) Department of Computer Science, City University of Hong Kong; 4) Department of Computer Science, Shantou University |
| Pseudocode | No | The paper describes algorithms through mathematical equations and textual explanations, but it does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | For the purpose of reproduction and extensions, our code is publicly available: https://github.com/guanyuelee/midrae |
| Open Datasets | Yes | In our experiments, we evaluate our proposed models on the following datasets: MNIST [LeCun et al., 1998], SVHN [Netzer et al., 2011], CIFAR-10 [Krizhevsky and Hinton, 2009] and CelebA [Liu et al., 2015]. (A minimal loading sketch follows the table.) |
| Dataset Splits | No | The paper uses standard datasets (MNIST, SVHN, CIFAR-10, CelebA) that come with predefined train/test splits, but it does not explicitly specify the split percentages or sample counts (including any validation split) in the main text. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions) used for the experiments. |
| Experiment Setup | Yes | In addition, the optimal hyperparameters of our models are as follows: ω3 = 0.5 and ω4 = 0.1 for MNIST; ω3 = 0.5 and ω4 = 0.05 for SVHN; ω3 = 0.01 and ω4 = 0.01 for CIFAR-10; and ω3 = 0.1 and ω4 = 0.05 for CelebA. (These weights are collected into a configuration sketch after the table.) |
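
All four benchmarks are publicly downloadable. As a convenience, here is a minimal loading sketch; the paper does not describe its data pipeline, so the use of torchvision, the `root` path, and the transform are illustrative assumptions rather than the authors' setup.

```python
# Minimal sketch (assumption: a torchvision pipeline; the paper does not
# state which loaders it uses) for fetching the four benchmark datasets.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # illustrative transform only

mnist   = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
svhn    = datasets.SVHN(root="data", split="train", download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
celeba  = datasets.CelebA(root="data", split="train", download=True, transform=to_tensor)
```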
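The per-dataset weights reported in the experiment-setup quote can be captured in a small configuration map. The sketch below assumes the weights enter the objective as coefficients on two regularization terms, consistent with the paper's "dual regularizations"; the names `OPTIMAL_WEIGHTS` and `total_loss` are hypothetical and do not reflect the repository's actual API.

```python
# Per-dataset weights (omega3, omega4) as reported in the paper.
OPTIMAL_WEIGHTS = {
    "MNIST":    {"omega3": 0.5,  "omega4": 0.1},
    "SVHN":     {"omega3": 0.5,  "omega4": 0.05},
    "CIFAR-10": {"omega3": 0.01, "omega4": 0.01},
    "CelebA":   {"omega3": 0.1,  "omega4": 0.05},
}

def total_loss(recon, reg_a, reg_b, dataset):
    """Hypothetical composition: reconstruction loss plus the two
    regularization terms weighted by the dataset-specific omegas."""
    w = OPTIMAL_WEIGHTS[dataset]
    return recon + w["omega3"] * reg_a + w["omega4"] * reg_b
```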