Localization, Convexity, and Star Aggregation

Authors: Suhas Vijaykumar

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that the offset complexity can be generalized to any loss that satisfies a certain general convexity condition. Further, we show that this condition is closely related to both exponential concavity and self-concordance, unifying apparently disparate results. By a novel geometric argument, many of our bounds translate to improper learning in a non-convex class with Audibert's star algorithm. Thus, the offset complexity provides a versatile analytic tool that covers both convex empirical risk minimization and improper learning under entropy conditions. Applying the method, we recover the optimal rates for proper and improper learning with the ℓp-loss for 1 < p < ∞, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models. (The standard offset Rademacher complexity being generalized here is recalled after this table.)
Researcher Affiliation | Academia | Suhas Vijaykumar, Statistics Center and Dept. of Economics, Massachusetts Institute of Technology, Cambridge, MA 02139; suhasv@mit.edu
Pseudocode | Yes |
procedure STAR(S, f)
    Find x̂ that minimizes f over S
    Find x̃ that minimizes f over star(x̂, S)
    Output x̃
end procedure
(A minimal Python sketch of this procedure appears after the table.)
Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories.
Open Datasets | No | The paper is theoretical and does not use specific datasets for training; it discusses theoretical properties of data and learning.
Dataset Splits | No | The paper is theoretical and does not specify dataset splits for validation or other purposes.
Hardware Specification | No | The paper is theoretical and does not describe the hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings.
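
For context on the Research Type row above: the offset complexity that the paper generalizes is, in its standard squared-loss form, the offset Rademacher complexity of Liang, Rakhlin, and Sridharan (2015). The recall below uses the conventional notation (Rademacher signs σ_i, offset parameter c > 0, data points x_1, ..., x_n) and is background, not a quotation from the paper.

    % Standard offset Rademacher complexity (background recall, not from the paper):
    % Rademacher signs \sigma_i, offset parameter c > 0, data points x_1, ..., x_n.
    \mathcal{R}^{\mathrm{off}}_n(\mathcal{F}, c)
      = \mathbb{E}_{\sigma} \sup_{f \in \mathcal{F}}
        \frac{1}{n} \sum_{i=1}^{n}
        \Bigl( \sigma_i f(x_i) - c\, f(x_i)^2 \Bigr)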
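
The STAR pseudocode in the table admits a direct implementation. Below is a minimal sketch, assuming a finite candidate class S of fitted-value vectors, an empirical-risk function f, and a grid search over the mixing weight; the names star_erm, S, f, and lam_grid are illustrative and not taken from the paper.

    import numpy as np

    def star_erm(S, f, lam_grid=None):
        """Two-step star procedure (sketch): ERM, then ERM over the star hull.

        S        : list of candidate predictors (arrays of fitted values)
        f        : empirical risk, mapping an array of fitted values to a float
        lam_grid : grid over the mixing weight lambda in [0, 1]
        """
        if lam_grid is None:
            lam_grid = np.linspace(0.0, 1.0, 101)

        # Step 1: empirical risk minimizer over the (possibly non-convex) class S.
        x_hat = min(S, key=f)

        # Step 2: minimize f over star(x_hat, S), the union of segments joining
        # x_hat to every element of S; the output may leave S (improper learning).
        best, best_risk = x_hat, f(x_hat)
        for x in S:
            for lam in lam_grid:
                cand = (1.0 - lam) * x_hat + lam * x
                risk = f(cand)
                if risk < best_risk:
                    best, best_risk = cand, risk
        return best

For example, with squared loss against observations y one could take f = lambda pred: np.mean((pred - y) ** 2); the grid search over lam_grid stands in for the exact one-dimensional minimization along each segment of the star hull.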