Statistical analysis of stochastic gradient methods for generalized linear models
Authors: Panagiotis Toulis, Edoardo Airoldi, Jason Rennie
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our set of experiments confirms our theory and more broadly suggests that the implicit procedure can be a competitive choice for fitting large-scale models, especially when robustness is a concern. We illustrate the different aspects of our theory on three separate sets of experiments. |
| Researcher Affiliation | Collaboration | Department of Statistics, Harvard University, 1 Oxford Street, Cambridge, MA 02138, USA. Google, Inc., 1600 Amphitheatre Pkwy, Mountain View, CA 94043 |
| Pseudocode | Yes | Algorithm 1 Implicit learning of canonical GLMs (see the illustrative sketch after this table) |
| Open Source Code | Yes | The full version of the paper together with the accompanying source code and documentation can be found at the following location: http://www.people.fas.harvard.edu/ptoulis/harvard-homepage/implicit-sgd.html. |
| Open Datasets | Yes | We implement an implicit online learning procedure for an SVM model and compare it to a standard SGD method on the RCV1 benchmark. |
| Dataset Splits | No | The paper mentions 'Test errors' on the RCV1 dataset but does not specify the train/validation/test splits, their percentages, or sample counts, nor does it refer to standard splits with citations. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU or CPU models, memory, or specific computing platforms used for the experiments. |
| Software Dependencies | No | The paper mentions using 'Bottou’s SVM SGD implementation' and 'Our implicit SVM' but does not specify versions for any software, libraries, or frameworks used in the experiments. |
| Experiment Setup | No | The paper mentions learning rate schedules (e.g., a_n = α/n) and regularization parameters (λ), but it does not provide a comprehensive set of hyperparameters, optimizer settings, or other detailed training configurations necessary for reproducibility. |
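For context on the Pseudocode and Experiment Setup rows above: the paper's Algorithm 1 performs implicit SGD for canonical GLMs, where each update solves a one-dimensional fixed-point equation because the update direction is always the current data point x_n. The sketch below is illustrative only and is not the authors' released code: it instantiates the idea for logistic regression (a canonical GLM with the sigmoid as inverse link), uses the a_n = α/n learning-rate schedule mentioned in the Experiment Setup row, and all function and variable names are our own.

```python
import numpy as np
from scipy.optimize import brentq


def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))


def implicit_sgd_logistic(X, y, alpha=1.0):
    """One pass of implicit SGD for logistic regression (a canonical GLM).

    The implicit update
        theta_n = theta_{n-1} + a_n * (y_n - h(x_n' theta_n)) * x_n
    reduces to finding a scalar xi_n with theta_n = theta_{n-1} + xi_n * x_n.
    """
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)
    for n in range(1, n_samples + 1):
        x_n, y_n = X[n - 1], y[n - 1]
        a_n = alpha / n                 # a_n = alpha / n schedule (illustrative)
        eta = x_n @ theta               # linear predictor before the update
        norm_sq = x_n @ x_n
        # Scalar fixed-point equation: xi = a_n * (y_n - h(eta + xi * ||x_n||^2)).
        f = lambda xi: xi - a_n * (y_n - sigmoid(eta + xi * norm_sq))
        # The root is bracketed between 0 and the explicit-SGD step size.
        bound = a_n * (y_n - sigmoid(eta))
        lo, hi = min(0.0, bound), max(0.0, bound)
        xi = 0.0 if lo == hi else brentq(f, lo, hi)
        theta = theta + xi * x_n
    return theta


# Hypothetical usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (sigmoid(X @ np.ones(5)) > rng.uniform(size=1000)).astype(float)
theta_hat = implicit_sgd_logistic(X, y, alpha=1.0)
```

Note that the scalar step xi_n is bracketed by the explicit SGD step a_n * (y_n - h(eta)), so the implicit update cannot overshoot it; this is consistent with the robustness point quoted in the Research Type row.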