Marginalized Denoising Auto-encoders for Nonlinear Representations

Authors: Minmin Chen, Kilian Weinberger, Fei Sha, Yoshua Bengio

ICML 2014

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In empirical evaluations we show that it attains 1-2 order-of-magnitude speedup in training time over other competing approaches." |
| Researcher Affiliation | Collaboration | Minmin Chen (M.CHEN@CRITEO.COM), Criteo; Kilian Weinberger (KILIAN@WUSTL.EDU), Washington University in St. Louis; Fei Sha (FEISHA@USC.EDU), University of Southern California; Yoshua Bengio, Université de Montréal and Canadian Institute for Advanced Research |
| Pseudocode | No | The paper describes the algorithms and mathematical derivations but does not include pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | "Our datasets consist of the original MNIST dataset (MNIST) for recognizing images of handwritten digits, for the sake of comparison with prior work a subsampled version (basic) and its several variants (Larochelle et al., 2007; Vincent et al., 2010; Rifai et al., 2011b)." |
| Dataset Splits | Yes | "Each dataset is split into three subsets: a training set for pre-training and fine-tuning the parameters, a validation set for choosing the hyper-parameters and a testing set on which the results are reported." |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | "These include the learning rate for pre-training and fine-tuning (candidate set [0.01, 0.05, 0.1, 0.2]), noise levels in mLDAE, DAE and our method mDAE (candidate set [0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3]), and the regularization coefficient in CAE (candidate set [0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9])." |
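The quoted experiment setup, combined with the dataset-split protocol above, implies a grid search over the per-model candidate sets with the validation split used for selection. A minimal sketch of that protocol in Python, assuming a hypothetical `train_and_eval` callback (the paper releases no code, so every name here is illustrative):

```python
import itertools

# Candidate sets quoted verbatim from the paper's experiment setup.
LEARNING_RATES = [0.01, 0.05, 0.1, 0.2]                    # pre-training / fine-tuning
NOISE_LEVELS = [0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3]   # mLDAE, DAE, mDAE
CAE_REG_COEFFS = [0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9]     # CAE only

def select_hyperparameters(train_and_eval):
    """Pick the (learning_rate, noise_level) pair with the best validation score.

    `train_and_eval(lr, noise) -> float` is a stand-in for training one
    denoising auto-encoder variant and returning its validation accuracy.
    The paper specifies only the candidate sets and the use of a
    validation split, not the search procedure itself.
    """
    best_score, best_config = float("-inf"), None
    for lr, noise in itertools.product(LEARNING_RATES, NOISE_LEVELS):
        score = train_and_eval(lr, noise)
        if score > best_score:
            best_score, best_config = score, (lr, noise)
    return best_config, best_score
```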
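For context on what the quoted "noise levels" control: in a denoising auto-encoder, each input is corrupted before reconstruction, typically by masking features or adding Gaussian noise. A minimal numpy sketch of such corruption, as an illustration only; the paper's mDAE marginalizes the corruption out analytically rather than sampling it, and it does not spell out how each candidate value maps to a corruption type:

```python
import numpy as np

def corrupt(X, noise_level, kind="mask", rng=None):
    """Apply input corruption at the given noise level (illustrative sketch).

    Masking noise zeroes each feature independently with probability
    `noise_level`; Gaussian noise adds zero-mean noise with standard
    deviation `noise_level`. Candidate values above 1.0 in the paper's
    set only make sense for the Gaussian case.
    """
    rng = np.random.default_rng() if rng is None else rng
    if kind == "mask":
        return X * (rng.random(X.shape) >= noise_level)
    if kind == "gaussian":
        return X + rng.normal(scale=noise_level, size=X.shape)
    raise ValueError(f"unknown corruption kind: {kind!r}")
```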