Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization

Authors: Deeksha Adil, Brian Bullins, Sushant Sachdeva

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We provide several algorithms for constrained optimization of a large class of convex problems... As our main contribution, we initiate a new direction of study by presenting the first unified approach to achieving m^{1/3}-type rates. Notably, our method goes beyond these previously considered problems to more broadly capture quasi-self-concordant losses... We first present in Section 3 a width-reduced method for obtaining a crude approximation to (1) for quasi-self-concordant f. At a high level, our algorithm returns an approximate solution x that both satisfies the linear constraints and is bounded in ℓ1-norm by O(R), where R is a bound on the norm of the optimal solution. Our algorithm is based on combining a multiplicative weight update (MWU) scheme with width reduction." The paper focuses on theoretical algorithm design, mathematical proofs, and convergence-rate analysis without empirical evaluation (a hedged sketch of the MWU-with-width-reduction pattern appears after this table).
Researcher Affiliation | Collaboration | Deeksha Adil (University of Toronto, deeksha@cs.toronto.edu); Brian Bullins (TTI Chicago, bbullins@ttic.edu); Sushant Sachdeva (University of Toronto, sachdeva@cs.toronto.edu)
Pseudocode | Yes | Algorithm 1 (Width-Reduced Algorithm for M-q.s.c. Functions), Algorithm 2 (Boosting to ε-approximation), and Algorithm 3
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and does not describe or use any datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not include any information about dataset splits (training, validation, or test).
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the hardware used for computations.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or system-level training settings.
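The Research Type row quotes the paper's high-level recipe: a multiplicative weight update (MWU) scheme interleaved with width-reduction steps. The Python sketch below illustrates that generic pattern only; it is not the paper's Algorithm 1. The function name width_reduced_mwu, the oracle choice (a weighted least-squares solve for the constraints Ax = b), and the parameters rho (width threshold), eps (step size), and T (iteration budget) are all assumptions made here for exposition.

    import numpy as np

    def width_reduced_mwu(A, b, T=100, rho=10.0, eps=0.1):
        """Hypothetical width-reduced MWU sketch (not the paper's Algorithm 1).

        Alternates primal MWU steps with width-reduction steps that
        penalize coordinates exceeding the width threshold rho.
        Assumes A has full row rank so the oracle solve is well posed.
        """
        m = A.shape[1]
        w = np.ones(m)                  # one multiplicative weight per coordinate
        x_sum = np.zeros(m)
        primal_steps, total = 0, 0
        while primal_steps < T and total < 20 * T:   # cap keeps the sketch finite
            total += 1
            # Oracle: minimize sum_i w_i * x_i^2 subject to A x = b.
            # Closed form: x = W^{-1} A^T (A W^{-1} A^T)^{-1} b.
            W_inv = 1.0 / w
            y = np.linalg.solve(A @ (W_inv[:, None] * A.T), b)
            x = W_inv * (A.T @ y)
            if np.abs(x).max() <= rho:
                # Low width: accept the step and reweight multiplicatively.
                w *= 1.0 + eps * np.abs(x) / rho
                x_sum += x
                primal_steps += 1
            else:
                # High width: boost weights of the offending coordinates so the
                # next oracle call steers away from them (width reduction).
                w[np.abs(x) > rho] *= 1.0 + eps
        return x_sum / max(primal_steps, 1)

Because every oracle solution is feasible, the averaged iterate satisfies Ax = b exactly; in the paper, the interplay between the two step types is what drives the m^{1/3}-type iteration bounds, which this toy loop does not attempt to reproduce.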