Regularized Autoencoders for Isometric Representation Learning
Authors: Yonghyeon Lee, Sangwoong Yoon, MinJun Son, Frank C. Park
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on diverse image and motion capture data confirm that, compared to existing related methods, our geometrically regularized autoencoder produces more isometric representations of the data while incurring only minimal losses in reconstruction accuracy. |
| Researcher Affiliation | Collaboration | 1 Department of Mechanical Engineering, Seoul National University 2 Saige Research |
| Pseudocode | Yes | The pseudocode is available in Appendix B. |
| Open Source Code | Yes | Code is available at https://github.com/Gabe-YHLee/IRVAE-public. |
| Open Datasets | Yes | Unsupervised representation learning methods are trained on CelebA (Liu et al., 2015), which contains 182,637 training images and 19,962 test images. |
| Dataset Splits | Yes | Dataset: We use the MNIST dataset. The training, validation, and test splits contain 50,000, 10,000, and 10,000 samples, respectively. |
| Hardware Specification | Yes | We use the GeForce RTX 3090 for GPU resources. |
| Software Dependencies | No | The paper mentions using 'pytorch style pseudocode' in Appendix B, but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We have included detailed experiment settings in Appendices D and E, such as the training/validation/test splits of datasets, preprocessing methods, neural network architectures, and hyperparameters used in model training (e.g., batch size, number of epochs, learning rate, etc.). |