Conditional gradient methods for stochastically constrained convex minimization

Authors: Maria-Luiza Vladarean, Ahmet Alacaoglu, Ya-Ping Hsieh, Volkan Cevher

ICML 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Preliminary numerical experiments are provided for illustrating the practical performance of the methods." and Section 5 (Numerical Experiments) |
| Researcher Affiliation | Academia | "École Polytechnique Fédérale de Lausanne, Switzerland. Correspondence to: Maria-Luiza Vladarean <maria-luiza.vladarean@epfl.ch>." |
| Pseudocode | Yes | Algorithm 1 (H-1SFW) and Algorithm 2 (H-SPIDER-FW); a hedged sketch of the H-1SFW pattern follows the table. |
| Open Source Code | No | No explicit statement of, or link to, an open-source code release for the described methodology was found. |
| Open Datasets | Yes | "In order to compare against existing work, we adopt the MNIST dataset (k = 10) (LeCun & Cortes, 2010)" and "We run our algorithms on three graphs of different sizes from the Network Repository dataset (Rossi & Ahmed, 2015)" |
| Dataset Splits | No | The paper uses well-known datasets but does not explicitly state the train/validation/test splits used in its experiments. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory) used to run the experiments are mentioned. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned. |
| Experiment Setup | Yes | "For a fair comparison, we sweep the parameter β0 for the three algorithms in the range [1e-7, 1e1]. We settle for 1e-7, 1e-7 and 1e-5 for SHCGM, H-1SFW and H-SPIDER-FW, respectively. For H-1SFW and SHCGM, we choose the batch size to be 1% of the data." and "The batch size for H-1SFW and SHCGM is set to 5%." A sketch of such a sweep also follows the table. |
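The Pseudocode row cites Algorithm 1 (H-1SFW), a homotopy one-sample stochastic Frank-Wolfe method. Since the paper's listing is not reproduced on this page, the following is a minimal Python sketch of the general pattern such a method follows, not the paper's exact algorithm: the step-size, averaging, and smoothing schedules (`gamma`, `rho`, `beta`) and the helpers `lmo` and `grad_sample` are assumptions for illustration.

```python
import numpy as np

def h_1sfw(lmo, grad_sample, x0, n_iters, beta0=1e-7):
    """Minimal sketch of a homotopy one-sample stochastic Frank-Wolfe loop.

    Assumed helpers (not from the paper's listing):
      lmo(g)               -- linear minimization oracle over the feasible
                              set: returns argmin_v <g, v>
      grad_sample(x, beta) -- one-sample stochastic gradient of the
                              beta-smoothed objective at x
    """
    x = np.array(x0, dtype=float)
    d = np.zeros_like(x)               # running gradient estimator
    for k in range(1, n_iters + 1):
        gamma = 2.0 / (k + 1)          # classical Frank-Wolfe step size
        rho = 1.0 / k ** (2.0 / 3.0)   # averaging weight (assumed schedule)
        beta = beta0 / np.sqrt(k)      # homotopy: shrink smoothing over time
        g = grad_sample(x, beta)       # a single stochastic sample per step
        d = (1.0 - rho) * d + rho * g  # average out the one-sample noise
        v = lmo(d)                     # extreme point minimizing <d, v>
        x = (1.0 - gamma) * x + gamma * v  # convex-combination update
    return x
```

The averaging step `d = (1 - rho) * d + rho * g` is what lets a one-sample method tolerate gradient noise; without it, a single stochastic sample per iteration would make the LMO direction too erratic.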
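The Experiment Setup quote describes a sweep of β0 over [1e-7, 1e1] with batch sizes fixed at a percentage of the data (1% or 5%, depending on the experiment). Below is a minimal sketch of such a log-scale sweep; `run_experiment` is a hypothetical stand-in for training one configuration and returning its final objective value.

```python
import numpy as np

def sweep_beta0(run_experiment, n_samples, frac=0.01):
    """Log-scale sweep of beta0 over [1e-7, 1e1], per the setup quote.

    run_experiment(beta0, batch_size) -> final objective value
    (hypothetical stand-in for one full training run).
    """
    batch_size = max(1, int(frac * n_samples))  # e.g., 1% of the data
    beta0_grid = 10.0 ** np.arange(-7.0, 2.0)   # 1e-7, 1e-6, ..., 1e1
    scores = {b0: run_experiment(b0, batch_size) for b0 in beta0_grid}
    return min(scores, key=scores.get)          # best-performing beta0
```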