Margin-Independent Online Multiclass Learning via Convex Geometry

Authors: Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Wang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We prove the following result: Theorem 1 (Restatement of Corollary 3.1). There exists an efficient algorithm for learning a linear classifier that incurs a total loss of at most O(d log d). We complement this result by showing that learning general convex sets requires an almost linear loss per query. Our results build off of regret guarantees for the geometric problem of contextual search." (An illustrative sketch of this online setting appears after the table.)
Researcher Affiliation | Collaboration | Guru Guruganesh (Google Research, gurug@google.com); Allen Liu (MIT, cliu568@mit.edu); Jon Schneider (Google Research, jschnei@google.com); Joshua Wang (Google Research, joshuawang@google.com)
Pseudocode | No | No pseudocode or algorithm blocks appear in the provided paper text.
Open Source Code | No | The paper's checklist explicitly states "N/A" for including code or instructions to reproduce experimental results.
Open Datasets | No | The paper is theoretical and uses no datasets for training; the checklist marks data-related questions as N/A.
Dataset Splits | No | The paper is theoretical and runs no experiments that would require dataset splits; the checklist marks data-related questions as N/A.
Hardware Specification | No | The paper is theoretical and reports no experimental hardware; the checklist marks hardware-related questions as N/A.
Software Dependencies | No | The paper is theoretical and lists no versioned software dependencies; the checklist marks software-related questions as N/A.
Experiment Setup | No | The paper is theoretical and describes no experimental setup (hyperparameters, training configurations, etc.); the checklist marks experiment-related questions as N/A.
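
The Research Type row quotes the paper's headline guarantee: an efficient online learner for linear multiclass classification whose total loss is O(d log d), independent of any margin assumption. As a point of reference only, the minimal sketch below implements the standard online multiclass perceptron in this protocol (contexts arrive one at a time, the learner predicts a label, and the true label is revealed). All names and parameters here (W_true, d, k, T) are illustrative assumptions; this is not the paper's algorithm, whose convex-geometry and contextual-search machinery is precisely what removes the perceptron's margin dependence.

```python
import numpy as np

# Illustrative sketch only: a multiclass perceptron baseline for the online
# protocol studied in the paper (contexts arrive online, the learner predicts
# a label, and observes the true label). The paper's algorithm is NOT a
# perceptron; it uses convex-geometric / contextual-search techniques to get
# a margin-independent O(d log d) total-loss bound, whereas the perceptron's
# mistake bound degrades as the margin shrinks.

rng = np.random.default_rng(0)
d, k, T = 5, 3, 200           # dimension, number of classes, rounds (assumed)

# Hypothetical ground-truth linear classifier: one weight vector per class.
W_true = rng.standard_normal((k, d))

W = np.zeros((k, d))          # learner's current weight vectors
mistakes = 0

for t in range(T):
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)                 # contexts on the unit sphere
    y_true = int(np.argmax(W_true @ x))    # label from the hidden classifier
    y_pred = int(np.argmax(W @ x))         # learner's prediction
    if y_pred != y_true:
        mistakes += 1
        # Standard multiclass perceptron update on a mistake.
        W[y_true] += x
        W[y_pred] -= x

print(f"mistakes over {T} rounds: {mistakes}")
```

For intuition on the contrast: the multiclass perceptron's mistake bound scales inversely with the square of the margin, so it degrades on contexts that fall near a decision boundary, while the paper's guarantee depends only on the dimension d.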