Learning Infinite RBMs with Frank-Wolfe
Authors: Wei Ping, Qiang Liu, Alexander T. Ihler
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we test the performance of our Frank-Wolfe (FW) learning algorithm on two datasets: MNIST [LeCun et al., 1998] and Caltech101 Silhouettes [Marlin et al., 2010]. |
| Researcher Affiliation | Academia | Wei Ping and Alexander Ihler (Computer Science, UC Irvine; {wping,ihler}@ics.uci.edu); Qiang Liu (Computer Science, Dartmouth College; qliu@cs.dartmouth.edu) |
| Pseudocode | Yes | Algorithm 1 Frank-Wolfe Learning Algorithm |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In this section, we test the performance of our Frank-Wolfe (FW) learning algorithm on two datasets: MNIST [LeCun et al., 1998] and Caltech101 Silhouettes [Marlin et al., 2010]. The MNIST handwritten digits database contains 60,000 images in the training set and 10,000 test set images... The Caltech101 Silhouettes dataset [Marlin et al., 2010] has 8,671 images with 28 × 28 binary pixels, where each image represents an object's silhouette and has a class label (overall 101 classes). |
| Dataset Splits | Yes | The MNIST handwritten digits database contains 60,000 images in the training set and 10,000 test set images... We binarize the grayscale images by thresholding the pixels at 127, and randomly select 10,000 images from training as the validation set. The Caltech101 Silhouettes dataset [Marlin et al., 2010]... is divided into three subsets: 4,100 examples for training, 2,264 for validation and 2,307 for testing. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models, memory, or specific computing environments used for running the experiments. |
| Software Dependencies | No | The paper mentions "CD-10 algorithm" but does not provide specific version numbers for any software dependencies, libraries, or programming languages used (e.g., Python version, PyTorch version). |
| Experiment Setup | Yes | A fixed learning rate is selected from the set {0.05, 0.02, 0.01, 0.005} using the validation set, and the mini-batch size is selected from the set {10, 20, 50, 100, 200}. We use 200 epochs for training on MNIST and 400 epochs on Caltech101. ... A fixed step size η is selected from the set {0.05, 0.02, 0.01, 0.005} using the validation data, and a regularization strength λ is selected from the set {1, 0.5, 0.1, 0.05, 0.01}. We set T = 700 in Algorithm 1. |
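The paper's Algorithm 1 is a Frank-Wolfe procedure run for T = 700 iterations. As a hedged illustration of the general Frank-Wolfe template only (not the paper's RBM-specific learning algorithm, whose linear minimization step selects hidden units), the sketch below runs Frank-Wolfe on a toy quadratic over the probability simplex; the function name and toy objective are assumptions for illustration.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, dim, num_iters=700):
    """Generic Frank-Wolfe over the probability simplex (illustrative sketch).

    Each iteration calls a linear minimization oracle (LMO), which over the
    simplex returns the vertex e_i with the smallest gradient coordinate,
    then moves the iterate toward that vertex with the standard 2/(t+2)
    step size.
    """
    x = np.ones(dim) / dim  # start at the simplex barycenter
    for t in range(num_iters):
        g = grad_f(x)
        i = int(np.argmin(g))           # LMO: pick the best simplex vertex
        s = np.zeros(dim)
        s[i] = 1.0                      # the selected "atom"
        gamma = 2.0 / (t + 2.0)         # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy objective: minimize ||x - c||^2 over the simplex; since c lies inside
# the simplex, the optimum is x* = c.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c), dim=3, num_iters=700)
```

In the paper's setting, the LMO instead searches for a new hidden unit to add to the model, so the greedy structure of the loop carries over even though the feasible set and objective differ.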