Tight Bounds for Approximate Carathéodory and Beyond
Authors: Vahab Mirrokni, Renato Paes Leme, Adrian Vladu, Sam Chiu-wai Wong
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the performance of our algorithm in two numerical experiments, presented in the figure below. We ran both the original sampling algorithm (Barman, 2015; Pisier, 1980) (where vertices are sampled from an exact convex combination) and our deterministic mirror-descent based algorithm on 100 instances. |
| Researcher Affiliation | Collaboration | 1Google Research, New York, NY, USA 2MIT, Cambridge, MA, USA 3UC Berkeley, Berkeley, CA, USA. |
| Pseudocode | Yes | z_{t+1} = z_t − η∇f(y_t), y_{t+1} = ∇ω*(z_{t+1}) (MD) |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper states that instances were "obtained by sampling a 1000 × 1000 Gaussian matrix, then scaling each column by the maximum ℓ2 (respectively ℓ∞) column norm," indicating synthetically generated data rather than a publicly accessible dataset with concrete access information. |
| Dataset Splits | No | The paper mentions running experiments on "100 instances" but does not specify how data was split into training, validation, or test sets with percentages, sample counts, or predefined citations. |
| Hardware Specification | No | The paper does not specify any hardware details like GPU/CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., library or solver names with versions like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | No | The paper describes how the input data instances were generated (e.g., "sampling a 1000 × 1000 Gaussian matrix"), but it does not provide specific experimental setup details such as hyperparameter values (learning rates, batch sizes) or optimizer settings. |
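The (MD) update quoted in the Pseudocode row is the generic mirror-descent iteration: a dual step z_{t+1} = z_t − η∇f(y_t) followed by mapping back to the primal domain via the conjugate gradient, y_{t+1} = ∇ω*(z_{t+1}). The sketch below is not the paper's algorithm; it is a minimal illustration of that iteration, assuming the negative-entropy mirror map on the probability simplex (so ∇ω* is the softmax), with a toy quadratic objective and step size chosen arbitrarily.

```python
import numpy as np

def softmax(z):
    """Gradient of the conjugate of the negative-entropy mirror map."""
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def mirror_descent(grad_f, dim, eta=0.1, steps=500):
    """Run the (MD) iteration: dual step, then map back to the simplex."""
    z = np.zeros(dim)
    y = softmax(z)               # y_0 = uniform distribution on the simplex
    for _ in range(steps):
        z = z - eta * grad_f(y)  # z_{t+1} = z_t - eta * grad f(y_t)
        y = softmax(z)           # y_{t+1} = grad omega*(z_{t+1})
    return y

# Toy objective (an assumption, not from the paper): f(y) = ||y - u||^2
# for a target point u in the interior of the simplex.
u = np.array([0.7, 0.2, 0.1])
y = mirror_descent(lambda y: 2 * (y - u), dim=3)
```

With the entropy mirror map the iterate y stays on the probability simplex by construction, and for this strongly convex objective it converges to u.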