Online Learning for Structured Loss Spaces
Authors: Siddharth Barman, Aditya Gopalan, Aadirupa Saha
AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We derive a regret bound for a general version of the online mirror descent (OMD) algorithm that uses a combination of regularizers, each adapted to the constituent atomic norms. The general result recovers standard OMD regret bounds, and yields regret bounds for new structured settings where the loss vectors are (i) noisy versions of vectors from a low-dimensional subspace, (ii) sparse vectors corrupted with noise, and (iii) sparse perturbations of low-rank vectors. For the problem of online learning with structured losses, we also show lower bounds on regret in terms of the rank and sparsity of the loss vectors, which imply lower bounds for the above additive loss settings as well. (These three loss structures are illustrated in the second sketch after the table.) |
| Researcher Affiliation | Academia | Siddharth Barman, Aditya Gopalan, Aadirupa Saha; Indian Institute of Science, Bangalore 560012; {barman, aditya, aadirupa}@iisc.ac.in |
| Pseudocode | Yes | Algorithm 1: Online Mirror Descent (OMD); a minimal sketch of this template appears after the table. |
| Open Source Code | No | The paper links to a full version on arXiv but does not provide source code for the described methodology. |
| Open Datasets | No | This is a theoretical paper and does not mention using publicly available datasets for training or evaluation. |
| Dataset Splits | No | This is a theoretical paper and does not involve experimental validation on data with specified splits. |
| Hardware Specification | No | This is a theoretical paper that does not involve experimental setup requiring hardware specifications. |
| Software Dependencies | No | This is a theoretical paper that does not detail specific software dependencies with version numbers for experimental reproducibility. |
| Experiment Setup | No | This is a theoretical paper and does not provide specific experimental setup details, hyperparameters, or training configurations. |
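
As a point of reference for the pseudocode noted above, the following is a minimal sketch of the online mirror descent template, instantiated with the negative-entropy regularizer on the probability simplex (the exponentiated-gradient special case). This instantiation is an illustrative assumption: the paper's Algorithm 1 is stated for a general regularizer, and its main analysis combines several regularizers adapted to atomic norms, which is not reproduced here. The function name `omd_simplex`, the step size `eta`, and the uniform initialization are choices made for this sketch, not taken from the paper.

```python
import numpy as np

def omd_simplex(loss_vectors, eta):
    """Online mirror descent on the probability simplex with the
    negative-entropy regularizer (exponentiated gradient).

    loss_vectors : sequence of length-d numpy arrays, one per round
    eta          : step size
    Returns the learner's cumulative linear loss sum_t <loss_t, x_t>.
    """
    d = len(loss_vectors[0])
    x = np.full(d, 1.0 / d)            # uniform starting point in the simplex
    total_loss = 0.0
    for loss in loss_vectors:
        total_loss += float(loss @ x)  # incur the linear loss <loss_t, x_t>
        x = x * np.exp(-eta * loss)    # mirror step under negative entropy
        x /= x.sum()                   # Bregman projection back onto the simplex
    return total_loss
```

For bounded losses, the standard tuning `eta = np.sqrt(2 * np.log(d) / T)` gives the familiar O(sqrt(T log d)) regret that the paper's general bound specializes to in the unstructured case.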
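
The three structured loss settings quoted in the Research Type row can be mimicked with synthetic loss vectors, for example to sanity-check a regret experiment against the sketch above. The generator below is a hypothetical construction: the setting names, the subspace dimension `k`, the sparsity `s`, and the noise scale are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

def structured_losses(T, d, setting, k=3, s=2, noise=0.05, seed=0):
    """Generate T loss vectors in R^d with one of the three additive
    structures described in the abstract.

    setting : "subspace+noise", "sparse+noise", or "lowrank+sparse"
    k       : dimension of the fixed low-dimensional subspace
    s       : number of nonzero entries in the sparse component
    noise   : scale of the dense noise component
    """
    rng = np.random.default_rng(seed)
    basis = rng.standard_normal((d, k))       # fixed k-dimensional subspace
    losses = []
    for _ in range(T):
        low = basis @ rng.standard_normal(k)  # vector from the subspace
        sparse = np.zeros(d)
        sparse[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
        dense = noise * rng.standard_normal(d)
        if setting == "subspace+noise":
            losses.append(low + dense)        # (i) noisy low-dimensional vector
        elif setting == "sparse+noise":
            losses.append(sparse + dense)     # (ii) sparse vector plus noise
        else:
            losses.append(low + sparse)       # (iii) sparse perturbation of a low-rank vector
    return losses
```

Feeding these into `omd_simplex` would require rescaling the losses to a bounded range first; the point of the sketch is only the additive structure of the loss vectors, not a faithful experimental setup (the paper reports no experiments).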