Neighborhood Reconstructing Autoencoders

Authors: Yonghyeon Lee, Hyeokjun Kwon, Frank C. Park

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments with standard datasets demonstrate that, compared to existing methods, NRAE improves both overfitting and local connectivity in the learned manifold, in some cases by significant margins. Experiments with both synthetic and image data (MNIST, Fashion MNIST, KMNIST, Omniglot, SVHN, CIFAR10, CIFAR100, CelebA) confirm that overall our method better learns the correct geometry of manifolds, showing improved generalization performance vis-à-vis existing graph-based and other autoencoder regularization methods.
Researcher Affiliation | Collaboration | Yonghyeon Lee (Seoul National University), Hyeokjun Kwon (Seoul National University), Frank C. Park (Seoul National University; Saige Research)
Pseudocode | No | The paper includes a section '2.2 Algorithmic Details' describing the steps for graph construction, kernel design, and batch sampling. However, these steps are given in prose rather than in a formally structured 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code for NRAE is available at https://github.com/Gabe-YHLee/NRAE-public.
Open Datasets | Yes | Extensive experiments with standard datasets demonstrate that, compared to existing methods, NRAE improves both overfitting and local connectivity in the learned manifold, in some cases by significant margins. Experiments with both synthetic and image data (MNIST, Fashion MNIST, KMNIST, Omniglot, SVHN, CIFAR10, CIFAR100, CelebA) confirm that overall our method better learns the correct geometry of manifolds, showing improved generalization performance vis-à-vis existing graph-based and other autoencoder regularization methods.
Dataset Splits | Yes | The numbers for the validation and test data are fixed at 10,000 and 50,000, respectively.
Hardware Specification | No | The checklist states that compute details are in the Supplementary Material ('Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See the Supplementary Material.'). However, the main body of the paper does not specify any hardware details such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper notes that 'implementation details including the hyperparameter tuning strategy' are available in the Supplementary Material. However, the main body of the paper does not list specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8').
Experiment Setup | No | The paper states: 'We refer the reader to the Supplementary Material for a description of the network architectures used in the experiments, together with implementation details including the hyperparameter tuning strategy.' The main text does not contain specific hyperparameters or other detailed experimental setup information.
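The 'Pseudocode' row notes that Section 2.2 of the paper describes graph construction, kernel design, and batch sampling only in prose. As a rough illustration of what those ingredients look like in code, here is a minimal, generic sketch of a k-nearest-neighbor graph with Gaussian kernel weights and a kernel-weighted neighborhood reconstruction loss. This is NOT the authors' exact NRAE objective (which approximates the decoder locally rather than decoding directly); all function names, the brute-force kNN search, the Gaussian kernel choice, and the identity-map usage below are illustrative assumptions.

```python
import numpy as np

def knn_graph(X, k):
    # Brute-force k-nearest-neighbor indices for each row of X (N, D).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Column 0 of the argsort is each point itself (distance 0); skip it.
    return np.argsort(d2, axis=1)[:, 1:k + 1]

def gaussian_kernel_weights(X, nbrs, bandwidth):
    # Down-weight distant neighbors with a Gaussian kernel, normalized per point.
    diffs = X[nbrs] - X[:, None, :]              # (N, k, D)
    w = np.exp(-(diffs ** 2).sum(-1) / (2.0 * bandwidth ** 2))
    return w / w.sum(axis=1, keepdims=True)      # rows sum to 1

def neighborhood_recon_loss(X, nbrs, w, encode, decode):
    # Kernel-weighted loss: decode each point's latent code and penalize
    # its distance to that point's neighbors (a plain stand-in objective).
    Z = encode(X)
    total = 0.0
    for i in range(len(X)):
        recon = decode(np.repeat(Z[i:i + 1], nbrs.shape[1], axis=0))
        total += (w[i] * ((recon - X[nbrs[i]]) ** 2).sum(-1)).sum()
    return total / len(X)
```

With identity encode/decode maps the loss reduces to kernel-weighted distances between each point and its neighbors, which makes the sketch easy to sanity-check before plugging in trained networks.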