RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates

Authors: Laurent Condat, Peter Richtárik

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The paper proposes a new primal-dual optimization algorithm (RandProx), derives its linear convergence under strong convexity, proves new results even in the deterministic case, and shows how it recovers or generalizes existing randomized algorithms. The paper consists entirely of mathematical derivations, proofs, and theoretical complexity analysis (e.g., big-O iteration and communication complexity), with no empirical studies, datasets, or experimental performance metrics.
Researcher Affiliation | Academia | Laurent Condat and Peter Richtárik, Visual Computing Center, King Abdullah University of Science and Technology (KAUST), Thuwal, Kingdom of Saudi Arabia.
Pseudocode | Yes | Algorithm 1: PDDY algorithm (Salim et al., 2022b); Algorithm 2: RandProx [new]. A minimal sketch of the randomized-prox idea appears after this table.
Open Source Code | No | The paper links to the first author's personal website, https://lcondat.github.io/, but contains no explicit statement or direct link confirming the release of source code for the methodology described in the paper.
Open Datasets | No | The paper is theoretical and involves no empirical experiments with datasets, so no training dataset information is provided.
Dataset Splits | No | The paper is theoretical and involves no empirical experiments with datasets, so no dataset split information (training, validation, test) is provided.
Hardware Specification | No | The paper is purely theoretical and includes no empirical experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is purely theoretical and includes no empirical experiments, so no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is purely theoretical and includes no empirical experiments, so no experimental setup details, hyperparameters, or training configurations are provided.
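
Since the paper's algorithms exist only as pseudocode, the following is a minimal, hypothetical Python sketch of the core idea it studies: a primal-dual iteration in which the dual proximal step is applied only with some probability p. This is NOT the paper's RandProx algorithm (which builds on PDDY and comes with specific step-size rules and variance-reduction guarantees); the problem instance, the step sizes, the skipping rule, and the name randprox_sketch are all illustrative assumptions made here.

    import numpy as np

    def randprox_sketch(A, b, lam=0.1, mu=0.1, p=0.5, iters=5000, seed=0):
        """Hypothetical sketch of a primal-dual iteration with a randomized prox.

        Solves min_x 0.5*||A x - b||^2 + (mu/2)*||x||^2 + lam*||x||_1.
        The smooth, strongly convex part f(x) = 0.5*||A x - b||^2 + (mu/2)*||x||^2
        is handled by gradient steps; the l1 term g is handled through its dual
        variable y, whose proximal update fires only with probability p.
        """
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        L = np.linalg.norm(A, 2) ** 2 + mu   # Lipschitz constant of grad f
        sigma = L                            # dual step size (illustrative choice)
        tau = 1.0 / (2.0 * L)                # primal step; 1/tau >= sigma + L/2 holds
        x = np.zeros(n)
        y = np.zeros(n)                      # dual variable for the l1 term (K = I)
        for _ in range(iters):
            grad = A.T @ (A @ x - b) + mu * x    # gradient of the smooth part
            x = x - tau * (grad + y)             # primal forward step
            if rng.random() < p:                 # randomized proximal update
                # prox of sigma*g* for g = lam*||.||_1 projects onto [-lam, lam]^n
                y = np.clip(y + sigma * x, -lam, lam)
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((40, 60))
        b = rng.standard_normal(40)
        x = randprox_sketch(A, b)
        obj = 0.5 * np.sum((A @ x - b) ** 2) + 0.05 * np.sum(x ** 2) + 0.1 * np.abs(x).sum()
        print("objective:", obj)

With p = 1 this sketch reduces to a standard deterministic primal-dual iteration; decreasing p trades per-iteration prox cost against the number of iterations. That trade-off is what the paper analyzes rigorously, e.g., when it recovers ProxSkip-style communication savings as a special case.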