Autoencoding Under Normalization Constraints
Authors: Sangwoong Yoon, Yung-Kyun Noh, Frank Park
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results confirm the efficacy of NAE, both in detecting outliers and in generating in-distribution samples. Section 6 presents experimental results. |
| Researcher Affiliation | Collaboration | 1Department of Mechanical Engineering, Seoul National University, Seoul, Republic of Korea 2Department of Computer Science, Hanyang University, Seoul, Republic of Korea 3Korea Institute of Advanced Studies, Seoul, Republic of Korea 4Saige Research, Seoul, Republic of Korea. |
| Pseudocode | Yes | We also write the process as an algorithm in Appendix. |
| Open Source Code | Yes | Our source code and pre-trained models are publicly available online at https://github.com/swyoon/normalized-autoencoders. |
| Open Datasets | Yes | An autoencoder trained on MNIST; MNIST hold-out class detection; We test two inlier datasets, CIFAR-10 or ImageNet 32×32 (ImageNet32). |
| Dataset Splits | No | The paper mentions 'MNIST hold-out class detection' and 'Zero-padded 32×32 MNIST images are used for model selection', but it does not provide specific details on the train/validation/test splits by percentages, sample counts, or explicit references to standard predefined splits for the datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or detailed computer specifications used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library or framework names with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | We add the average squared energy of negative samples in a mini-batch to the loss function: $\mathcal{L} = \mathcal{L}_{\mathrm{NAE}} + \alpha \sum_{i=1}^{B} E(x_i)^2 / B$ for the batch size $B$ and the hyperparameter $\alpha$. We set $\alpha = 1$. The temperature is optimized by gradient descent. |
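
The quoted regularizer is straightforward to express in code. The sketch below is a minimal PyTorch-style illustration, assuming the standard contrastive energy-based surrogate for $\mathcal{L}_{\mathrm{NAE}}$ (data energies pushed down, negative-sample energies pushed up); the names `regularized_nae_loss`, `energy_pos`, and `energy_neg` are hypothetical and are not taken from the authors' repository.

```python
import torch


def regularized_nae_loss(energy_pos, energy_neg, alpha=1.0):
    """Sketch of the regularized loss quoted above (assumed form).

    energy_pos: E(x) evaluated on positive (data) samples, shape (B,)
    energy_neg: E(x') evaluated on negative (model) samples, shape (B,)
    alpha: weight of the squared-energy regularizer (the paper sets alpha = 1)
    """
    # Assumed contrastive EBM surrogate: lower the energy of data,
    # raise the energy of negative samples.
    loss_nae = energy_pos.mean() - energy_neg.mean()
    # Regularizer quoted in the paper: average squared energy of
    # the negative samples in the mini-batch, weighted by alpha.
    reg = alpha * (energy_neg ** 2).mean()
    return loss_nae + reg


# Illustrative usage with random energies (not real model outputs).
energy_pos = torch.rand(128)
energy_neg = torch.rand(128)
loss = regularized_nae_loss(energy_pos, energy_neg, alpha=1.0)
```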