The dynamics of representation learning in shallow, non-linear autoencoders

Authors: Maria Refinetti, Sebastian Goldt

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We finally show that our equations accurately describe the generalisation dynamics of non-linear autoencoders trained on realistic datasets such as CIFAR10, thus establishing shallow autoencoders as an instance of the recently observed Gaussian universality."
Researcher Affiliation | Academia | "1 Department of Physics, École Normale Supérieure, Paris, France; 2 IdePHICS laboratory, EPFL, Lausanne, Switzerland; 3 International School of Advanced Studies (SISSA), Trieste, Italy. Correspondence to: Sebastian Goldt <sgoldt@sissa.it>."
Pseudocode | No | The paper does not contain a pseudocode block or a section explicitly labelled "Algorithm" or "Pseudocode".
Open Source Code | Yes | "We provide code to solve the dynamical equations of section 3.1 and to reproduce our plots at https://github.com/mariaref/NonLinearShallowAE."
Open Datasets | Yes | "Finally, we show that our equations accurately describe the generalisation dynamics of non-linear autoencoders trained on realistic datasets such as CIFAR10, thus establishing shallow autoencoders as an instance of the recently observed Gaussian universality." "Bottom: example inputs drawn from CIFAR10 (Krizhevsky et al., 2009), a benchmark data set we use for our experiments with realistic data in section 4." "We train autoencoders with K = 64 neurons to reconstruct Fashion MNIST images using the vanilla and truncated SGD algorithms until convergence. We show the reconstruction using all the neurons of the networks in the left column of fig. 5(d)." "from CIFAR10 (Krizhevsky et al., 2009) with crosses."
Dataset Splits | No | No explicit mention of a "validation" dataset or split was found; the paper primarily discusses training and testing.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided.
Software Dependencies | No | No specific software versions (e.g., Python, PyTorch, TensorFlow versions) were mentioned.
Experiment Setup | Yes | "Parameters: M = 3, η = 1, D = 1000." "Parameters: D = 500, η = 1 (a, b), M = 1, K = 1." "Parameters: D = 1000, K = 5, η = 1." "Parameters: η = 1, K = 4 ((a) and (b)), K = 64 ((c) and (d)), bs = 1, P = 60000." "Parameters: D = 1024, η = 1." "Parameters: D = 1024, K = 5, η = 10^-2."
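To make the reported hyperparameters concrete, the following is a minimal sketch of the kind of training run the Experiment Setup row describes: a shallow non-linear autoencoder trained with online (one-sample, bs = 1) SGD, using one of the paper's parameter settings (D = 1000, K = 5, η = 1). The tanh activation, the i.i.d. Gaussian inputs, the untied weights, and the 1/sqrt(D) scaling are illustrative assumptions here, not details quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, eta, steps = 1000, 5, 1.0, 5000  # one of the paper's settings: D=1000, K=5, eta=1

# Encoder/decoder weights for the shallow autoencoder x_hat = W2 @ tanh(W1 @ x / sqrt(D)).
W1 = rng.normal(size=(K, D))
W2 = rng.normal(scale=1.0 / np.sqrt(K), size=(D, K))

def eval_loss(n=200):
    """Mean reconstruction error 0.5 <(x_hat - x)^2> on n fresh Gaussian samples."""
    X = rng.normal(size=(n, D))
    A = np.tanh(X @ W1.T / np.sqrt(D))
    return 0.5 * np.mean((A @ W2.T - X) ** 2)

loss_before = eval_loss()

for t in range(steps):
    x = rng.normal(size=D)            # fresh sample each step: online SGD, batch size 1
    a = np.tanh(W1 @ x / np.sqrt(D))  # hidden activations
    x_hat = W2 @ a                    # reconstruction
    err = (x_hat - x) / D             # gradient of the per-sample loss 0.5 * mean_j (x_hat_j - x_j)^2
    W2 -= eta * np.outer(err, a)
    W1 -= eta * np.outer((W2.T @ err) * (1 - a ** 2), x) / np.sqrt(D)

loss_after = eval_loss()
```

After training, `loss_after` should sit below `loss_before`: on unstructured Gaussian inputs most of the initial test error comes from the random decoder's output noise, which SGD quickly suppresses.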