Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method

Authors: Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally evaluate MOPES (Algorithm 1) and MOLES (Algorithm 2) on a low-rank SVM problem [85] of the form (8), using a subset of the Imagewoof 2.0 dataset [43]. Figure 1 plots the mean (over 10 runs) sub-optimality gap f(x_k) − f̂ of the iterates against the number of PO (top) and FO (bottom) calls used to obtain each iterate. (See the Moreau-envelope sketch after the table for the smoothing idea behind both methods.)
Researcher Affiliation | Collaboration | Kiran Koshy Thekumparampil, University of Illinois at Urbana-Champaign (thekump2@illinois.edu); Prateek Jain, Microsoft Research, India (prajain@microsoft.com); Praneeth Netrapalli, Microsoft Research, India (praneeth@microsoft.com); Sewoong Oh, University of Washington, Seattle (sewoong@cs.washington.edu)
Pseudocode | Yes | Algorithm 1: MOPES: MOreau Projection Efficient Subgradient method. (A generic projected-subgradient baseline, illustrating the PO/FO cost profile MOPES improves on, is sketched after the table.)
Open Source Code | Yes | Code for the experiments is available at https://github.com/tkkiran/MoreauSmoothing
Open Datasets | Yes | A subset of the Imagewoof 2.0 dataset [43].
Dataset Splits | No | The paper mentions that 'the training data contains n = 400 samples' but gives no train/validation/test split sizes, percentages, or splitting methodology.
Hardware Specification | No | The paper does not report hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper does not list the software dependencies or library versions used for its experiments.
Experiment Setup | No | The paper specifies problem parameters such as n = 400 samples and a nuclear-norm ball radius r = 0.1 for X, but it does not report algorithmic hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings). (A sketch of projection onto the nuclear-norm ball, the PO used in these experiments, follows the table.)
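As referenced in the Research Type row, MOPES and MOLES are built on Moreau-envelope smoothing of the nonsmooth objective. The following is a minimal illustrative sketch, not the paper's Algorithm 1: it assumes the toy objective f(x) = |x| (so the prox is soft-thresholding), and the helper names `prox_abs` and `moreau_grad` are hypothetical.

```python
import numpy as np

def prox_abs(x, mu):
    # Proximal operator of f(x) = |x|: soft-thresholding with threshold mu.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def moreau_grad(x, mu, prox=prox_abs):
    # The Moreau envelope f_mu(x) = min_y { f(y) + ||x - y||^2 / (2 mu) }
    # is (1/mu)-smooth, and its gradient comes directly from the prox:
    #     grad f_mu(x) = (x - prox_{mu f}(x)) / mu.
    return (x - prox(x, mu)) / mu

# Gradient descent on the smooth surrogate f_mu approximately minimizes f;
# the step size 1/L = mu matches the envelope's smoothness constant L = 1/mu.
x, mu = 5.0, 1.0
for _ in range(100):
    x -= mu * moreau_grad(x, mu)
print(x)  # -> 0.0, the minimizer of |x|
```

Smaller mu tracks f more closely but makes the surrogate less smooth, which is the accuracy/iteration-cost trade-off the smoothing parameter controls.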
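For contrast with the pseudocode row: the classical projected subgradient method pays one projection (PO call) per first-order (FO) call, which is the cost profile MOPES is designed to improve (roughly, far fewer PO calls for the same O(1/ε²) FO budget). A generic sketch of that baseline, with hypothetical arguments `f`, `subgrad`, and `project`:

```python
import numpy as np

def projected_subgradient(x0, f, subgrad, project, steps, R=1.0, G=1.0):
    # Classical baseline (not the paper's MOPES):
    #     x_{k+1} = project(x_k - eta_k * g_k),  eta_k = R / (G * sqrt(k + 1)),
    # where R bounds the feasible set's diameter and G the subgradient norms.
    # Every iteration pays one FO call (subgrad) *and* one PO call (project).
    x = x0.copy()
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        eta = R / (G * np.sqrt(k + 1.0))
        x = project(x - eta * subgrad(x))
        fx = f(x)
        if fx < best_f:  # the method is not monotone, so track the best iterate
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

When the projection is expensive (as with the nuclear-norm ball below), this one-PO-per-step coupling dominates the runtime, which motivates projection-efficient methods.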
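The experiment-setup row mentions a nuclear-norm ball of radius r = 0.1 as the feasible set; its projection oracle is the PO whose call count Figure 1 measures. A standard way to implement it (a sketch, assuming NumPy and the usual sorting-based ℓ1-ball projection; function names are ours):

```python
import numpy as np

def project_l1_ball(v, r):
    # Euclidean projection of v onto the l1 ball {w : ||w||_1 <= r},
    # via the standard sorting-based algorithm.
    if np.sum(np.abs(v)) <= r:
        return v
    u = np.sort(np.abs(v))[::-1]              # sorted magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.max(ks[u - (css - r) / ks > 0])  # largest still-active index
    theta = (css[rho - 1] - r) / rho          # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_nuclear_ball(X, r=0.1):
    # PO for the experiments' feasible set: the nuclear-norm ball of radius r.
    # Project the singular values onto the l1 ball and reconstruct the matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, r)) @ Vt
```

With the paper's radius, each PO call would be project_nuclear_ball(X, 0.1); the full SVD per call is what makes these projections expensive and PO-efficiency worthwhile.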