Scheduled denoising autoencoders

Authors: Krzysztof Geras and Charles Sutton

Venue: ICLR 2015

Reproducibility assessment (variable, result, and supporting LLM response):

Research Type: Experimental
"Experimentally, we find on both image and text data that scheduled denoising autoencoders learn better representations than standard denoising autoencoders, as measured by the features' performance on a supervised task. On both classification tasks, the representation from ScheDA yields lower test error than that from a denoising autoencoder trained at the best single noise level."

Researcher Affiliation: Academia
Krzysztof J. Geras, School of Informatics, University of Edinburgh (k.j.geras@sms.ed.ac.uk); Charles Sutton, School of Informatics, University of Edinburgh (csutton@inf.ed.ac.uk).

Pseudocode: Yes (a runnable sketch of this schedule appears after this table)
    while θ not converged do
        Take a stochastic gradient step on (1), using noise level ν0.
    end while
    for t in 1, ..., T do
        νt := νt−1 − Δν
        for K steps do
            Take a stochastic gradient step on (1), using noise level νt.
        end for
    end for

Open Source Code: No
The paper mentions implementing the experiments with the "Theano library (Bergstra et al., 2010)" and "LIBLINEAR (Fan et al., 2008)", but does not provide a link or an explicit statement about releasing its own open-source code for the methodology described.

Open Datasets: Yes
"We use the CIFAR-10 (Krizhevsky, 2009) data set for experiments with vision data. ... We also evaluate our idea on a data set of product reviews from Amazon (Blitzer et al., 2007)..."

Dataset Splits: Yes
"There are 50000 training and validation images and 10000 test images. ... We divide the training and validation set into 45000 training instances and 5000 validation instances."

Hardware Specification: No
The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. It only mentions "Theano: a CPU and GPU math expression compiler" when referring to the software library used.

Software Dependencies: No
The paper mentions using the "Theano library (Bergstra et al., 2010)" and "L2-regularised logistic regression implemented in LIBLINEAR (Fan et al., 2008)" (a sketch of this evaluation protocol appears after this table), but does not provide version numbers for these software dependencies.

Experiment Setup: Yes (the quoted grid is enumerated in a sketch after this table)
"We try all combinations of the following values of the parameters: noise level {0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05}, learning rate {0.002, 0.01, 0.05}, number of training epochs {100, 200, ..., 2000}. ... We use the learning rate of 0.01 for this stage..."
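
The pseudocode row above is the paper's training schedule. Below is a minimal NumPy sketch of that schedule, not the authors' implementation (which used Theano): `sgd_step(params, x_corrupted, x_clean)` is a hypothetical helper standing in for "take a stochastic gradient step on (1)", a fixed `pretrain_steps` loop replaces the convergence check, and the default noise values are illustrative rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, noise_level):
    # Masking noise: each input unit is set to zero with probability `noise_level`.
    return x * (rng.random(x.shape) >= noise_level)

def train_scheda(sgd_step, params, batches, nu0=0.7, delta_nu=0.1, n_levels=5,
                 pretrain_steps=10000, steps_per_level=2000):
    # Stage 0: ordinary denoising autoencoder training at the initial noise level nu0
    # (stands in for the "while theta not converged" loop in the pseudocode).
    nu = nu0
    for step in range(pretrain_steps):
        x = batches[step % len(batches)]
        params = sgd_step(params, corrupt(x, nu), x)
    # Stages 1..T: lower the corruption level by delta_nu and keep training.
    for t in range(n_levels):
        nu = max(nu - delta_nu, 0.0)          # nu_t := nu_{t-1} - delta_nu
        for step in range(steps_per_level):   # "for K steps do"
            x = batches[step % len(batches)]
            params = sgd_step(params, corrupt(x, nu), x)
    return params
```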
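
The paper evaluates the learned representations with L2-regularised logistic regression implemented in LIBLINEAR (see the Software Dependencies row). As a sketch of that protocol, the snippet below uses scikit-learn's liblinear-backed LogisticRegression as a stand-in for a direct LIBLINEAR call; `encode` is a hypothetical function mapping raw inputs to the autoencoder's hidden representation, and the regularisation strength C would in practice be chosen on the validation set.

```python
from sklearn.linear_model import LogisticRegression

def feature_test_error(encode, X_train, y_train, X_test, y_test, C=1.0):
    # Fit an L2-regularised logistic regression on the learned features
    # (scikit-learn's "liblinear" solver wraps the LIBLINEAR library).
    clf = LogisticRegression(penalty="l2", C=C, solver="liblinear")
    clf.fit(encode(X_train), y_train)
    # Return classification error on the held-out test set.
    return 1.0 - clf.score(encode(X_test), y_test)
```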
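
The hyperparameter grid quoted in the Experiment Setup row can be enumerated as follows; this only spells out the quoted candidate values, and the training and validation run performed for each configuration is omitted.

```python
from itertools import product

noise_levels = [0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
learning_rates = [0.002, 0.01, 0.05]
training_epochs = list(range(100, 2001, 100))  # 100, 200, ..., 2000

configs = [
    {"noise_level": nu, "learning_rate": lr, "epochs": n}
    for nu, lr, n in product(noise_levels, learning_rates, training_epochs)
]
print(len(configs))  # 8 * 3 * 20 = 480 candidate configurations
```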