Online and Stochastic Gradient Methods for Non-decomposable Loss Functions
Authors: Purushottam Kar, Harikrishna Narasimhan, Prateek Jain
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then use extensive experimentation on real-life and benchmark datasets to establish that our method can be orders of magnitude faster than a recently proposed cutting plane method. |
| Researcher Affiliation | Collaboration | Microsoft Research, India; Indian Institute of Science, Bangalore, India |
| Pseudocode | Yes | Algorithm 1 (1PMB: Single-Pass with Mini-batches); a hedged sketch of this update loop appears after the table. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We used several data sets for our experiments [...] and the remaining datasets are taken from the UCI repository [22]. [...] [22] A. Frank and A. Asuncion. The UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2010. |
| Dataset Splits | Yes | We used 70% of the data set for training and the remaining for testing, with the results averaged over 5 random train-test splits. Tunable parameters such as step length scale were chosen using a small validation set. (This evaluation protocol is sketched after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9') required to replicate the experiment. |
| Experiment Setup | Yes | The epoch lengths/buffer sizes were set to 500 in all experiments. |
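The Pseudocode row points to the paper's Algorithm 1 (1PMB), which makes a single pass over the stream in mini-batches and, on each batch, takes a gradient step on the non-decomposable loss evaluated over that batch alone. The sketch below is not the authors' implementation: the function name `one_pass_mini_batch`, the `batch_grad` callback, the `eta0 / sqrt(t)` step-size schedule, and the iterate averaging are all assumptions used for illustration; only the default buffer size of 500 comes from the Experiment Setup row.

```python
import numpy as np

def one_pass_mini_batch(X, y, batch_grad, batch_size=500, eta0=1.0):
    """Minimal 1PMB-style loop: one pass over the data in mini-batches.

    batch_grad(w, Xb, yb) is a hypothetical callback returning a
    (sub)gradient of the non-decomposable loss evaluated on the
    mini-batch alone; the paper instantiates such gradients for
    measures like Prec@k and partial AUC.
    """
    n, d = X.shape
    w = np.zeros(d)
    avg = np.zeros(d)
    for t, start in enumerate(range(0, n, batch_size), start=1):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        g = batch_grad(w, Xb, yb)      # loss gradient on this batch only
        w -= (eta0 / np.sqrt(t)) * g   # decaying step size (assumed schedule)
        avg += (w - avg) / t           # running average of the iterates
    return avg
```

Per the table, the epoch lengths/buffer sizes (here `batch_size`) were set to 500 in all of the paper's experiments.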
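The Dataset Splits row quotes a 70/30 train-test protocol averaged over 5 random splits. A minimal sketch of that protocol follows, assuming scikit-learn's `train_test_split`; the `fit` and `score` callbacks are hypothetical stand-ins for the learner and the (possibly non-decomposable) evaluation measure, and seeding the splits by loop index is an assumption, since the paper does not specify how the random splits were drawn.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def evaluate_protocol(X, y, fit, score, n_splits=5, test_frac=0.3):
    """Average a performance measure over random 70/30 train-test splits.

    fit(X_tr, y_tr) -> model and score(model, X_te, y_te) -> float are
    caller-supplied stand-ins for the training routine and the metric.
    """
    results = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_frac, random_state=seed)
        results.append(score(fit(X_tr, y_tr), X_te, y_te))
    return float(np.mean(results)), float(np.std(results))
```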