Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions

Authors: Quanqi Hu, Qi Qi, Zhaosong Lu, Tianbao Yang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we conduct experiments on positive-unlabeled (PU) learning and partial area under ROC curve (pAUC) optimization with an adversarial fairness regularizer to validate the effectiveness of our proposed algorithms."
Researcher Affiliation | Academia | (1) Department of Computer Science & Engineering, Texas A&M University; (2) Department of Computer Science, The University of Iowa; (3) Department of Industrial and Systems Engineering, University of Minnesota
Pseudocode | Yes | "Algorithm 1 Stochastic Moreau Envelope Approximate Gradient Method (SMAG) ... Algorithm 2 SMAG for DWC Optimization ... Algorithm 3 SMAG for WCSC Min-Max Optimization"
Open Source Code | Yes | "The code is included in the supplemental material."
Open Datasets | Yes | "We use four multi-class classification datasets: Fashion-MNIST [36], MNIST [5], CIFAR10 [14], and FER2013 [6]."
Dataset Splits | Yes | "We divide the dataset into training, validation, and test data with an 80%/10%/10% split."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "For all datasets, we use a batch size of 64 and set πp = 0.5. We train 40 epochs and decay the learning rate by 10 at epoch 12 and 24. The learning rates of SGD, SDCA, SSDC-SPG and SSDC-Adagrad, the learning rate of the inner loop of SBCD (i.e., µηt/(µ + ηt)), and η1 in SMAG are all tuned from {10, 1, 0.2, 0.1, 0.01, 0.001}. The learning rate of the outer loop in SDCA and η0 in SMAG are tuned from {0.1, 0.5, 0.9}. The numbers of inner loops for all double-loop methods are tuned from {2, 5, 10}. The µ in SBCD, 1/γ in SSDC-SPG and SSDC-Adagrad, γ in SMAG are tuned in {0.05, 0.1, 0.2, 0.5, 1, 2}."
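The tuning protocol quoted above amounts to a small grid search over four hyperparameter families. A minimal sketch of enumerating those grids, where the dictionary keys are illustrative names and not identifiers from the paper's released code:

```python
from itertools import product

# Grids quoted in the experiment setup; key names are illustrative only.
grids = {
    "lr": [10, 1, 0.2, 0.1, 0.01, 0.001],  # SGD/SDCA/SSDC variants, eta_1 in SMAG
    "outer_lr": [0.1, 0.5, 0.9],           # SDCA outer loop, eta_0 in SMAG
    "inner_loops": [2, 5, 10],             # double-loop methods only
    "gamma": [0.05, 0.1, 0.2, 0.5, 1, 2],  # mu in SBCD, 1/gamma in SSDC, gamma in SMAG
}

# Cartesian product of all grid values -> one dict per candidate configuration.
configs = [dict(zip(grids, values)) for values in product(*grids.values())]
print(len(configs))  # 6 * 3 * 3 * 6 = 324 candidate configurations
```

In practice each method only tunes the subset of these grids that applies to it, so the per-method search is far smaller than the full product.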
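The SMAG algorithms named in the pseudocode row are built on gradients of Moreau envelopes. As a self-contained illustration of that building block (not the paper's algorithm): the Moreau envelope of ψ is ψ_γ(x) = min_y ψ(y) + ‖y − x‖²/(2γ), and its gradient is (x − prox_{γψ}(x))/γ. For the toy choice ψ(y) = |y|, the proximal operator is soft-thresholding:

```python
import numpy as np

def prox_abs(x, gamma):
    # Proximal operator of psi(y) = |y|: soft-thresholding with threshold gamma.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def moreau_grad(x, gamma):
    # Gradient of the Moreau envelope psi_gamma at x:
    # grad psi_gamma(x) = (x - prox_{gamma * psi}(x)) / gamma.
    return (x - prox_abs(x, gamma)) / gamma

print(moreau_grad(2.0, 0.5))   # 1.0: matches sign(x) where |x| > gamma
print(moreau_grad(-0.1, 0.5))  # -0.2: smoothed slope -x/gamma near the kink at 0
```

The envelope is differentiable even where ψ is not, which is what lets a single-loop method take (approximate) gradient steps on it.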