Mixability made efficient: Fast online multiclass logistic regression
Authors: Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Although GAF is primarily theoretically motivated in a worst-case analysis, here we study its performance on real data sets. We consider three datasets (vehicle, shuttle, and segmentation taken from LIBSVM Data) and compare the performance of GAF with two well-used algorithms: Online Gradient Descent (OGD) (Zinkevich, 2003) and Online Newton Step (ONS) (Hazan et al., 2007). The algorithm of Foster et al. (2018) is not considered because of prohibitive computational complexity. Concerning the hyper-parameters, the values suggested by the theory are generally too conservative. We thus choose the best ones in a grid for each algorithm (λ, β ∈ {0.01, 0.03, 0.1, 0.3, 1, 3, 10}). |
| Researcher Affiliation | Academia | Rémi Jézéquel, INRIA, Département d'Informatique de l'École Normale Supérieure, PSL Research University, Paris, France (remi.jezequel@inria.fr); Pierre Gaillard, Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, France (pierre.gaillard@inria.fr); Alessandro Rudi, INRIA, Département d'Informatique de l'École Normale Supérieure, PSL Research University, Paris, France (alessandro.rudi@inria.fr) |
| Pseudocode | Yes | Algorithm 1: Efficient-GAF for K-class logistic regression |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We consider three datasets (vehicle, shuttle, and segmentation taken from LIBSVM Data¹) and compare the performance of GAF with two well-used algorithms: Online Gradient Descent (OGD) (Zinkevich, 2003) and Online Newton Step (ONS) (Hazan et al., 2007). The algorithm of Foster et al. (2018) is not considered because of prohibitive computational complexity. Concerning the hyper-parameters, the values suggested by the theory are generally too conservative. We thus choose the best ones in a grid for each algorithm (λ, β ∈ {0.01, 0.03, 0.1, 0.3, 1, 3, 10}). ¹ https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html |
| Dataset Splits | No | The paper mentions using specific datasets but does not provide details on training, validation, or test splits (e.g., percentages or sample counts). It refers to 'averaged losses over time' but not specific data splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU, GPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation or experiments. |
| Experiment Setup | Yes | Concerning the hyper-parameters, the values suggested by the theory are generally too conservative. We thus choose the best ones in a grid for each algorithm (λ, β ∈ {0.01, 0.03, 0.1, 0.3, 1, 3, 10}). |
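The experiment setup above selects hyper-parameters by sweeping a fixed grid and keeping the value that minimizes averaged loss over time. As a minimal sketch of that protocol, the snippet below runs one of the paper's baselines, Online Gradient Descent on the multiclass logistic loss, over the same grid of values; the synthetic data, the loss implementation, and the choice of tuning only the step size are illustrative assumptions, not the paper's exact setup (which also tunes a regularization parameter λ and uses the LIBSVM datasets).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for a LIBSVM multiclass dataset:
# T rounds, d features, K classes, labels from a noisy linear model.
T, d, K = 500, 5, 3
W_true = rng.normal(size=(K, d))
X = rng.normal(size=(T, d))
y = np.argmax(X @ W_true.T + rng.gumbel(size=(T, K)), axis=1)

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def ogd_logistic(X, y, K, eta):
    """Online Gradient Descent on the multiclass logistic loss.

    At each round: predict with current weights, incur log loss on the
    revealed label, take a gradient step. Returns the averaged loss over
    time, the criterion used to compare hyper-parameter values.
    """
    T, d = X.shape
    W = np.zeros((K, d))
    total_loss = 0.0
    for t in range(T):
        p = softmax(W @ X[t])
        total_loss += -np.log(p[y[t]] + 1e-12)
        grad = np.outer(p - np.eye(K)[y[t]], X[t])  # dloss/dW
        W -= eta * grad
    return total_loss / T

# Same grid of candidate values as reported for (lambda, beta).
grid = [0.01, 0.03, 0.1, 0.3, 1, 3, 10]
losses = {eta: ogd_logistic(X, y, K, eta) for eta in grid}
best_eta = min(losses, key=losses.get)
print(f"best eta = {best_eta}, averaged loss = {losses[best_eta]:.3f}")
```

The best grid value should achieve an averaged loss below log K (the loss of always predicting uniformly), which is the kind of comparison the paper's loss-over-time plots make between GAF, OGD, and ONS.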