Conic Blackwell Algorithm: Parameter-Free Convex-Concave Saddle-Point Solving

Authors: Julien Grand-Clément, Christian Kroer

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present numerical experiments for solving matrix games and distributionally robust optimization problems. Our empirical results show that CBA+ is a simple algorithm that outperforms state-of-the-art methods on synthetic data and real data instances." and, from Section 4 (Numerical experiments): "In this section we investigate the practical performances of our algorithms on several instances of saddle-point problems."
Researcher Affiliation | Academia | Julien Grand-Clément, ISOM Department, HEC Paris (grand-clement@hec.fr); Christian Kroer, IEOR Department, Columbia University (christian.kroer@columbia.edu)
Pseudocode | Yes | "Algorithm 1: Conic Blackwell Algorithm Plus (CBA+)"
Open Source Code | Yes | "The code for all experiments is available in the supplemental material."
Open Datasets | Yes | "For the real classification instances, we use the following data sets from the libsvm website: adult, australian, splice, madelon." (footnote: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/)
Dataset Splits | No | The paper uses synthetic and real datasets but does not explicitly describe how the data were split into training, validation, or test sets, nor does it specify any cross-validation strategy.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or solver versions).
Experiment Setup | Yes | "We use linear averaging on decisions for all algorithms, and parameters x_0 = 0, R = 10, y_0 = (1, ..., 1)/m, λ = 1/2m in eq. (5)." and "We highlight this by showing the performance of all four algorithms, for various fixed step sizes η = α · η_th, where α ∈ {1, 100, 1,000, 10,000} is a multiplier and η_th is the theoretical step size which guarantees the convergence of the algorithms for each instance."
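
As a reading aid for the quoted setup, the parameters above can be collected into a small configuration sketch. This is a minimal, hypothetical Python illustration, not the authors' released code: the function name make_setup, the instance sizes, and the placeholder value for the theoretical step size eta_th are assumptions; in the paper, eta_th is instance-dependent.

# Hypothetical sketch of the quoted experiment setup (not the authors' code).
# eta_th below is a placeholder assumption; the paper's eta_th is the
# instance-dependent theoretical step size that guarantees convergence.
import numpy as np

def make_setup(n, m, R=10.0, alpha=1, eta_th=1e-3):
    x0 = np.zeros(n)        # x0 = 0
    y0 = np.ones(m) / m     # y0 = (1, ..., 1) / m
    lam = 1.0 / (2 * m)     # lambda = 1/2m in eq. (5), read here as 1/(2*m)
    eta = alpha * eta_th    # fixed step size eta = alpha * eta_th
    return x0, y0, lam, R, eta

# Step-size multipliers compared against the parameter-free CBA+ runs.
for alpha in (1, 100, 1_000, 10_000):
    x0, y0, lam, R, eta = make_setup(n=50, m=50, alpha=alpha)  # arbitrary instance size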