Diving into the shallows: a computational perspective on large-scale shallow learning
Authors: Siyuan Ma, Mikhail Belkin
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 6 Experimental Results |
| Researcher Affiliation | Academia | Siyuan Ma, Mikhail Belkin, Department of Computer Science and Engineering, The Ohio State University, {masi, mbelkin}@cse.ohio-state.edu |
| Pseudocode | Yes | Algorithm: EigenPro(X, y, k, m, η, τ, M) |
| Open Source Code | Yes | In the second part of the paper we propose EigenPro iteration (see http://www.github.com/EigenPro for the code) |
| Open Datasets | Yes | Dataset sizes from the results table (comparing EigenPro and Pegasos under Gaussian, Laplace, and Cauchy kernels): MNIST 6×10⁴, CIFAR-10 5×10⁴, SVHN 7×10⁴, HINT-S 5×10⁴, TIMIT 1×10⁶, SUSY 4×10⁶ |
| Dataset Splits | No | The paper mentions "train" and "test" data in tables but does not explicitly provide information on validation splits or methodology. |
| Hardware Specification | Yes | Experiments were run on a workstation with 128GB main memory, two Intel Xeon(R) E5-2620 CPUs, and one GTX Titan X (Maxwell) GPU. |
| Software Dependencies | No | The paper mentions software like Pegasos and Random Fourier Features but does not specify their version numbers or other ancillary software dependencies with versions. |
| Experiment Setup | Yes | For consistent comparison, all iterative methods use mini-batches of size m = 256. The EigenPro preconditioner is constructed using the top k = 160 eigenvectors of a subsampled dataset of size M = 4800. For EigenPro-RF, we set the damping factor τ = 1/4. For primal EigenPro, τ = 1. |
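The setup row above describes the core EigenPro mechanism: build a preconditioner from the top-k eigenvectors of the kernel matrix, damp those eigendirections by a factor involving τ, and use the resulting headroom to take much larger steps. A minimal sketch of that idea follows; it is illustrative only, not the paper's implementation. It uses full-batch Richardson iteration on a dense kernel matrix over a toy dataset instead of the paper's mini-batch SGD (m = 256) and Nyström subsampling (M = 4800), and the kernel bandwidth, dataset, and k = 20 are made-up values chosen for a small example.

```python
import numpy as np

def gaussian_kernel(X, Z, s=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * s**2))

def eigenpro_solve(X, y, k=20, tau=1.0, iters=50, s=1.0):
    # Kernel regression K @ alpha ~= y via preconditioned Richardson
    # iteration. The preconditioner P = I - sum_i (1 - tau*lam_{k+1}/lam_i)
    # e_i e_i^T damps the top-k eigendirections of K/n, which makes the
    # boosted step size eta = 1/lam_{k+1} stable (with the plain gradient,
    # stability would require eta on the order of 1/lam_1).
    n = len(X)
    K = gaussian_kernel(X, X, s)
    lam, E = np.linalg.eigh(K / n)          # ascending eigenvalues
    lam, E = lam[::-1], E[:, ::-1]          # reorder to descending
    top, lam_next = E[:, :k], lam[k]        # top-k eigenvectors, (k+1)-th value
    damp = 1.0 - tau * lam_next / lam[:k]   # shrink factors for top-k directions
    eta = 1.0 / lam_next                    # boosted step size
    alpha = np.zeros(n)
    for _ in range(iters):
        r = K @ alpha - y                   # residual on the training set
        g = r - top @ (damp * (top.T @ r))  # apply preconditioner P to r
        alpha -= eta * g / n
    return alpha, K

# Toy data (hypothetical, for illustration): a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)
alpha, K = eigenpro_solve(X, y, k=20, tau=1.0, iters=50)
mse = np.mean((K @ alpha - y) ** 2)
```

With τ = 1 (the primal EigenPro setting quoted above), each of the top-k effective eigenvalues is pulled down exactly to lam_{k+1}, so those components of the residual are eliminated in a single step at eta = 1/lam_{k+1}; smaller τ (e.g. 1/4 for EigenPro-RF) damps more conservatively.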