On Accelerated Perceptrons and Beyond

Authors: Guanghui Wang, Rafael Hanashiro, Etash Kumar Guha, Jacob Abernethy

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we unify these existing results under one framework by showing that they can all be described through the lens of solving min-max problems using modern acceleration techniques, mainly through optimistic online learning. We then show that the proposed framework also leads to improved results for a series of problems beyond the standard Perceptron setting." (An illustrative sketch of this min-max machinery follows the table.)
Researcher Affiliation | Collaboration | (1) College of Computing, Georgia Tech, Atlanta, GA, USA; (2) Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA; (3) Google Research, Atlanta, GA 30309
Pseudocode | Yes | Algorithm 1: Smooth Perceptron (Soheili & Peña, 2012) ... Algorithm 2: Accelerated Perceptron of Ji et al. (2021) ... Algorithm 3: NAG ... Algorithm 4: Accelerated algorithm for the p-norm perceptron (a minimal NAG sketch also follows the table)
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link, an explicit statement of code release, or mention of code in supplementary materials) for the source code of the methodology described.
Open Datasets | No | The paper mentions 'a set S of n training examples' but does not provide concrete access information (a specific link, DOI, repository name, formal citation, or reference to established benchmark datasets) for a publicly available or open dataset.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper discusses algorithmic parameters but does not provide specific experimental setup details such as concrete hyperparameter values, training configurations, or system-level settings for empirical evaluation.
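
The "Research Type" row quotes the paper's framing of accelerated Perceptrons as min-max problems solved via optimistic online learning. As a hedged illustration of that machinery only, the sketch below runs optimistic gradient descent-ascent (OGDA) on a generic bilinear game min_x max_y x^T A y. The matrix A, step size eta, and horizon T are arbitrary placeholder choices, and this is not the paper's accelerated Perceptron; it only shows the optimistic update that the unifying framework builds on.

```python
import numpy as np

def ogda_bilinear(A, eta=0.05, T=2000, seed=0):
    """Optimistic gradient descent-ascent (OGDA) on the bilinear saddle-point
    problem min_x max_y x^T A y.  Illustrative only: it demonstrates the
    optimistic update x_{t+1} = x_t - eta * (2*g_t - g_{t-1}) that underlies
    the optimistic-online-learning view, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = rng.standard_normal(n), rng.standard_normal(m)
    gx_prev, gy_prev = np.zeros(n), np.zeros(m)
    for _ in range(T):
        gx, gy = A @ y, A.T @ x            # gradients of x^T A y in x and y
        x = x - eta * (2 * gx - gx_prev)   # descent step with optimism
        y = y + eta * (2 * gy - gy_prev)   # ascent step with optimism
        gx_prev, gy_prev = gx, gy
    return x, y

# For a full-rank A the unique saddle point of this unconstrained game is
# the origin, so both iterates should shrink toward zero.
A = np.array([[1.0, 2.0], [-1.0, 0.5]])   # arbitrary demo matrix
x_star, y_star = ogda_bilinear(A)
print(np.linalg.norm(x_star), np.linalg.norm(y_star))
```

The extrapolated gradient (2*g_t - g_{t-1}) is what distinguishes the optimistic step from plain gradient descent-ascent, which can cycle indefinitely on bilinear games.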
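Of the four pseudocode listings named in the "Pseudocode" row, Algorithm 3 (NAG) is the standard Nesterov accelerated gradient method. Below is a minimal textbook-style sketch of that scheme under the usual L-smoothness assumption, using the common k/(k+3) momentum schedule; it is not a transcription of the paper's Algorithm 3, and the quadratic demo objective is an assumption for illustration.

```python
import numpy as np

def nag(grad, x0, L, T=500):
    """Textbook Nesterov accelerated gradient method for an L-smooth convex
    objective: a gradient step taken at an extrapolated point, followed by a
    momentum update.  A generic sketch, not the paper's Algorithm 3."""
    x = x0.copy()
    y = x0.copy()
    for k in range(T):
        x_next = y - grad(y) / L                    # gradient step at y_k
        y = x_next + (k / (k + 3)) * (x_next - x)   # momentum extrapolation
        x = x_next
    return x

# Minimize the smooth quadratic f(x) = 0.5 * x^T diag(D) x - b^T x,
# whose smoothness constant is L = max(D) and minimizer is b / D.
D = np.array([2.0, 10.0])
b = np.array([3.0, -1.0])
x_min = nag(lambda x: D * x - b, np.zeros(2), L=10.0)
print(x_min)  # approaches [1.5, -0.1]
```

Evaluating the gradient at the extrapolated point y, rather than at the current iterate x, is what separates NAG from plain heavy-ball momentum and yields the O(1/k^2) convergence rate for smooth convex objectives.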