Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dueling Convex Optimization with General Preferences
Authors: Aadirupa Saha, Tomer Koren, Yishay Mansour
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main contribution is an efficient algorithm with convergence rate of $O(\epsilon^{-4p})$ for smooth convex functions, and an optimal rate of $\tilde{O}(\epsilon^{-2p})$ when the objective is both smooth and strongly convex, where $p$ is the minimal degree (with a non-zero coefficient) in the Taylor series expansion of the transfer function about the origin. |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science, University of Illinois Chicago, US; ²Blavatnik School of Computer Science, Tel Aviv University, Israel; ³Google Research, Tel Aviv, Israel. Correspondence to: Aadirupa Saha <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Projected Dueling Descent (PDD) ... Algorithm 2 Epoch-PDD |
| Open Source Code | No | The paper does not provide any statements or links regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical, focusing on algorithm design and convergence analysis for convex optimization. It does not conduct empirical studies that would involve datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve datasets or experiments; therefore, no dataset splits are mentioned. |
| Hardware Specification | No | The paper is theoretical and focuses on algorithm design and convergence proofs. It does not describe any experiments that would require specific hardware, thus no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not describe any experimental implementation. Therefore, no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and presents algorithms with convergence proofs. It does not describe any empirical experiments, so no experimental setup details, such as hyperparameters or system-level training settings, are provided. |
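To make the "Pseudocode" entry concrete for readers unfamiliar with dueling (comparison-based) optimization, here is a minimal conceptual sketch of a projected descent loop driven only by a preference oracle, in the spirit of the paper's Projected Dueling Descent. This is not the authors' Algorithm 1; the function names (`prefers`, `project`), the step sizes, and the sign-based update rule are illustrative assumptions.

```python
import numpy as np

def projected_dueling_descent(prefers, project, x0, eta, delta, T, seed=0):
    """Conceptual sketch: descend using only pairwise comparisons.

    prefers(y, x) -> True if the oracle prefers point y over point x
                     (e.g., f(y) < f(x) for some hidden objective f).
    project(x)    -> projection of x onto the feasible set.
    eta, delta    -> step size and probe radius (hypothetical choices).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        # Sample a random unit probe direction.
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)
        # Query the preference oracle on the dueling pair (x + delta*u, x):
        # a preferred perturbation suggests u is a descent direction.
        s = -1.0 if prefers(x + delta * u, x) else 1.0
        # Move against the sign feedback, then project back.
        x = project(x - eta * s * u)
    return x
```

As a usage example, minimizing the hidden objective $f(x)=\lVert x\rVert^2$ with the exact comparison oracle `prefers = lambda y, x: np.sum(y**2) < np.sum(x**2)` and the identity projection drives the iterate toward the origin, up to a neighborhood of radius on the order of `eta`.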