Semi-supervised Learning with Ladder Networks
Authors: Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, Tapani Raiko
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We ran experiments both with the MNIST and CIFAR-10 datasets, where we attached the decoder both to fully-connected MLP networks and to convolutional neural networks. |
| Researcher Affiliation | Collaboration | Antti Rasmus and Harri Valpola, The Curious AI Company, Finland; Mikko Honkala, Nokia Labs, Finland; Mathias Berglund and Tapani Raiko, Aalto University, Finland & The Curious AI Company, Finland |
| Pseudocode | Yes | Algorithm 1 Calculation of the output y and cost function C of the Ladder network |
| Open Source Code | Yes | The source code for all the experiments is available at https://github.com/arasmus/ladder. |
| Open Datasets | Yes | We ran experiments both with the MNIST and CIFAR-10 datasets |
| Dataset Splits | Yes | For evaluating semi-supervised learning, we randomly split the 60 000 training samples into a 10 000-sample validation set and used M = 50 000 samples as the training set. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided. The paper only mentions 'computational resources provided by the Aalto Science-IT project'. |
| Software Dependencies | No | The software for the simulations for this paper was based on Theano [32] and Blocks [33]. No specific version numbers for these software dependencies are provided. |
| Experiment Setup | Yes | We used the Adam optimization algorithm [14]. The initial learning rate was 0.002 and it was decreased linearly to zero during a final annealing phase. The minibatch size was 100. |
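
The Pseudocode row above quotes the caption of Algorithm 1, which computes the output y and the cost function C of the Ladder network. As a rough illustration only (not the authors' implementation, which is at the linked repository), the sketch below shows the structure of that cost: supervised cross-entropy plus per-layer weighted denoising reconstruction terms. The function name, the use of NumPy, and the assumption that activations are stored as `(batch_size, layer_width)` arrays are ours.

```python
import numpy as np

def ladder_cost(cross_entropy, clean_z, reconstructed_z, lambdas):
    """Sketch of the Ladder network cost C: supervised cross-entropy on
    labelled examples plus a weighted squared denoising cost per layer.

    clean_z / reconstructed_z: lists of per-layer activations, each of
    shape (batch_size, layer_width); lambdas: per-layer weights.
    """
    denoising_cost = sum(
        # average over the batch of the squared error, normalized by layer width
        lam * np.mean(np.sum((z - z_hat) ** 2, axis=1) / z.shape[1])
        for lam, z, z_hat in zip(lambdas, clean_z, reconstructed_z)
    )
    return cross_entropy + denoising_cost
```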
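The Dataset Splits row describes a random 50 000 / 10 000 split of the MNIST training set. A minimal sketch of how such a split could be reproduced is shown below; it assumes the 60 000 training images and labels are already loaded as NumPy arrays, and the seed value is an arbitrary choice, not taken from the paper.

```python
import numpy as np

def split_train_validation(train_images, train_labels, n_validation=10000, seed=0):
    """Randomly split the 60 000 MNIST training samples into a
    50 000-sample training set and a 10 000-sample validation set."""
    rng = np.random.RandomState(seed)
    permutation = rng.permutation(len(train_images))
    valid_idx, train_idx = permutation[:n_validation], permutation[n_validation:]
    return (train_images[train_idx], train_labels[train_idx],
            train_images[valid_idx], train_labels[valid_idx])
```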
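The Experiment Setup row reports Adam with an initial learning rate of 0.002, decreased linearly to zero during a final annealing phase, and a minibatch size of 100. The schedule below is a sketch of that linear annealing; the total epoch count and the epoch at which annealing begins are illustrative assumptions, not values quoted from the paper.

```python
def learning_rate(epoch, n_epochs=150, anneal_start=100, base_lr=0.002):
    """Linear learning-rate annealing: constant at base_lr, then decreased
    linearly to zero over the final (n_epochs - anneal_start) epochs."""
    if epoch < anneal_start:
        return base_lr
    remaining = max(n_epochs - epoch, 0)
    return base_lr * remaining / float(n_epochs - anneal_start)
```

In practice this per-epoch value would be passed to the Adam optimizer before each training epoch, with minibatches of 100 samples as stated above.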