Temporal Variability in Implicit Online Learning
Authors: Nicolò Campolongo, Francesco Orabona
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we validate our theoretical findings on classification and regression datasets. |
| Researcher Affiliation | Academia | Nicolò Campolongo, Università di Milano, nicolo.campolongo@unimi.it; Francesco Orabona, Boston University, francesco@orabona.com |
| Pseudocode | Yes | Algorithm 1 Implicit Online Mirror Descent (IOMD) and Algorithm 2 AdaImplicit |
| Open Source Code | No | The paper provides no statement about, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | We used datasets from the LIBSVM library [6]. |
| Dataset Splits | No | The paper mentions using datasets from the LIBSVM library and that 'Details about the datasets can be found in Appendix D.', but the main text and Appendix D do not specify training/validation/test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | We set β = 1 in all algorithms. ... We consider values of β in [2^-20, 2^20] with a grid containing 41 points. ... For classification tasks we use the hinge loss, while for regression tasks we use the absolute loss. In both cases, we adopt the squared L2 norm for ψ. ... Before running the algorithms, we preprocess the data by dividing each feature by its maximum absolute value so that all the values are in the range [-1, 1], then we add a bias term. |
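The experiment setup quoted above can be sketched in code. The following is a minimal sketch, not the authors' implementation: it shows the described preprocessing (per-feature max-absolute-value scaling plus a bias column) and one implicit OMD step with ψ equal to the squared L2 norm and the hinge loss, whose proximal subproblem has a closed form. The function names and the step-size argument `lam` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def preprocess(X):
    """Scale each feature by its maximum absolute value so all entries lie in
    [-1, 1], then append a constant bias column (as described in the setup)."""
    max_abs = np.max(np.abs(X), axis=0)
    max_abs[max_abs == 0] = 1.0  # leave all-zero features unchanged
    X_scaled = X / max_abs
    return np.hstack([X_scaled, np.ones((X.shape[0], 1))])

def iomd_hinge_step(x, z, y, lam):
    """One implicit OMD step with psi = 0.5 * ||.||_2^2 and the hinge loss:
        x_new = argmin_w  lam * max(0, 1 - y * <z, w>) + 0.5 * ||w - x||^2
    The proximal problem has the closed-form (Passive-Aggressive-style)
    solution below; lam plays the role of the step size."""
    hinge = max(0.0, 1.0 - y * np.dot(z, x))
    tau = min(lam, hinge / np.dot(z, z))  # tau = 0 if the margin is already met
    return x + tau * y * z
```

For example, starting from the zero vector, a single step on a misclassified example moves the iterate just far enough to satisfy the margin when `lam` is large, mirroring the "aggressive" behavior of the implicit update.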