Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method

Authors: Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally evaluate MOPES (Algorithm 1) and MOLES (Algorithm 2) on a low-rank SVM problem [85] of the form (8) on a subset of the Imagewoof 2.0 dataset [43]. In Figure 1 we plot the mean (over 10 runs) sub-optimality gap, f(x_k) − f̂, of the iterates against the number of PO (top) and FO (bottom) calls, respectively, used to obtain that iterate."
Researcher Affiliation | Collaboration | Kiran Koshy Thekumparampil (University of Illinois at Urbana-Champaign); Prateek Jain (Microsoft Research, India); Praneeth Netrapalli (Microsoft Research, India); Sewoong Oh (University of Washington, Seattle)
Pseudocode | Yes | "Algorithm 1: MOPES: MOreau Projection Efficient Subgradient method"
Open Source Code | Yes | "Code for the experiments is available at https://github.com/tkkiran/Moreau Smoothing"
Open Datasets | Yes | "a subset of the Imagewoof 2.0 dataset [43]."
Dataset Splits | No | The paper states "The training data contains n = 400 samples" but gives no train/validation/test splits, percentages, or methodology for splitting the dataset.
Hardware Specification | No | The paper does not specify the hardware used for the experiments (GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper does not list the software dependencies or library versions used in its experiments.
Experiment Setup | No | The paper specifies n = 400 samples and r = 0.1 as the nuclear-norm-ball radius of X as problem parameters, but it does not give training hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) for the algorithms.
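The Research Type row describes the paper's evaluation protocol: each method is run 10 times and the mean sub-optimality gap f(x_k) − f̂ is plotted against the number of oracle calls. A minimal sketch of that aggregation step, with simulated trajectories standing in for the paper's data (`objective_per_run`, `f_hat`, and all constants here are illustrative, not values from the paper):

```python
import numpy as np

# Hypothetical stand-in: 10 runs of an optimizer, each producing a trajectory
# of objective values f(x_k) that decays toward the optimum f_hat.
rng = np.random.default_rng(0)
n_runs, n_iters = 10, 50
f_hat = 1.0
objective_per_run = f_hat + np.exp(-0.1 * np.arange(n_iters)) * (
    1 + 0.05 * rng.standard_normal((n_runs, n_iters))
)

# Mean (over runs) sub-optimality gap f(x_k) - f_hat at each iterate,
# the quantity the paper plots against PO/FO call counts in Figure 1.
mean_gap = (objective_per_run - f_hat).mean(axis=0)
```

With trajectories recorded per oracle call rather than per iterate, `mean_gap` would be plotted directly against the call index.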
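The Experiment Setup row mentions a nuclear-norm ball of radius r = 0.1 as the constraint set, so the projection oracle (PO) calls counted in the experiments are projections onto that ball. This is not the paper's implementation, but a generic SVD-based sketch of such a projection: project the singular values onto the l1-ball (standard sort-based routine) and recompose. The function names are mine:

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of a nonnegative vector v onto the l1-ball of radius r."""
    if v.sum() <= r:
        return v
    u = np.sort(v)[::-1]              # singular values in decreasing order
    css = np.cumsum(u)
    # Largest index k with u_k * (k+1) > css_k - r determines the threshold.
    k = np.nonzero(u * np.arange(1, len(u) + 1) > (css - r))[0][-1]
    theta = (css[k] - r) / (k + 1)
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(X, r):
    """Project X onto {Z : ||Z||_* <= r} via its singular value decomposition."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, r)) @ Vt
```

Each call costs one full SVD, which is why methods like MOPES that reduce the number of PO calls matter when the matrix variable is large.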