Delving into the Convergence of Generalized Smooth Minimax Optimization

Authors: Wenhan Xian, Ziyi Chen, Heng Huang

ICML 2024

Reproducibility Assessment

Research Type: Experimental
Evidence: "In this section, we will conduct an experiment of the robust logistic regression task to validate the performance of our generalized minimax optimization methods with the suitable stepsize strategy."

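For context, robust logistic regression is typically posed as a min-max problem in which the max player reweights per-sample losses. The sketch below is one common formulation of that kind (an assumption on my part, in the style of Yan et al., 2019), not necessarily the paper's exact objective; the function name and regularization form are illustrative.

```python
import numpy as np

def robust_logreg_objective(w, p, X, y, lam1, lam2):
    """A common min-max robust logistic regression objective (an assumed
    formulation, not taken from the paper): minimize over the model w,
    maximize over sample weights p on the probability simplex."""
    losses = np.logaddexp(0.0, -y * (X @ w))       # per-sample logistic loss, y in {-1, +1}
    n = losses.shape[0]
    return (p @ losses                             # p-weighted empirical risk
            + lam1 * np.dot(w, w)                  # ridge penalty on w
            - lam2 * np.sum((p - 1.0 / n) ** 2))   # keep p close to uniform
```
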
Researcher Affiliation: Academia
Evidence: "Wenhan Xian, Ziyi Chen, Heng Huang (Department of Computer Science, University of Maryland, College Park, MD, United States). Correspondence to: Wenhan Xian <wxian1@umd.edu>, Ziyi Chen <zc286@umd.edu>, Heng Huang <heng@umd.edu>."

Pseudocode: Yes
Evidence: "Algorithm 1: Generalized GDA or SGDA. Algorithm 2: Generalized GDmax or SGDmax."

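The paper's Algorithms 1 and 2 are not reproduced in this report. For orientation only, here is a minimal sketch of a plain alternating stochastic gradient descent-ascent loop; the function names, signatures, and update order are illustrative assumptions and do not model the paper's generalized-smooth stepsize strategy.

```python
def sgda(x, y, grad_x, grad_y, sample_batch, eta_x, eta_y, steps):
    """Minimal alternating SGDA sketch (illustrative assumption,
    not the paper's Algorithm 1)."""
    for _ in range(steps):
        batch = sample_batch()                  # draw a mini-batch
        x = x - eta_x * grad_x(x, y, batch)     # gradient descent step on the min variable x
        y = y + eta_y * grad_y(x, y, batch)     # gradient ascent step on the max variable y
    return x, y
```
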
Open Source Code: Yes
Evidence: "The code is available at https://github.com/WH-XIAN/AS-SGDA."

Open Datasets: Yes
Evidence: "We run the experiment and verify our method on 9 real-world datasets: a9a, covtype, diabetes, german, gisette, ijcnn1, mushrooms, phishing, and w8a, which can be downloaded from the LIBSVM repository at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets. The description of these datasets is listed in Table 2."

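Assuming a standard Python/scikit-learn setup (the paper does not state its software stack), LIBSVM-format files such as these can be loaded as follows; the local file path is a placeholder for a file downloaded from the repository URL above.

```python
from sklearn.datasets import load_svmlight_file

# Placeholder path: download e.g. the "a9a" file from the LIBSVM repository first.
X, labels = load_svmlight_file("a9a")
print(X.shape, labels.shape)  # sparse feature matrix and {-1, +1} label vector
```
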
Dataset Splits: No
Evidence: The paper runs experiments on 9 real-world datasets but does not describe how they are split into training, validation, or test sets (no percentages, sample counts, or references to standard splits).

Hardware Specification: No
Evidence: The paper gives no details of the hardware used for the experiments (e.g., CPU or GPU models, memory).

Software Dependencies: No
Evidence: The paper points to the LIBSVM repository for datasets but does not list the software dependencies (libraries, frameworks, or language versions) used in its implementation or experiments.

Experiment Setup: Yes
Evidence: "The mini-batch size is set to 50. For each algorithm, we choose the best learning rates η and η_y from {0.1, 0.01, 0.001, 0.0001, 1e-5, 1e-6} by grid search. Following the experimental settings in (Yan et al., 2019), we set λ1 = 1/n², λ2 = 0.001 and α = 10 in our experiment."
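
A minimal sketch of the quoted grid search, assuming a hypothetical training routine train_fn that returns a validation metric to minimize (the quote does not state the selection criterion); the routine's parameter names are illustrative.

```python
import itertools

def grid_search(train_fn, n_samples):
    """Search the quoted learning-rate grid; train_fn is hypothetical."""
    grid = [0.1, 0.01, 0.001, 0.0001, 1e-5, 1e-6]
    lam1, lam2, alpha = 1.0 / n_samples**2, 0.001, 10.0  # settings quoted from the paper
    best_score, best_cfg = float("inf"), None
    for eta, eta_y in itertools.product(grid, grid):     # all (η, η_y) pairs
        score = train_fn(eta=eta, eta_y=eta_y, batch_size=50,
                         lam1=lam1, lam2=lam2, alpha=alpha)
        if score < best_score:
            best_score, best_cfg = score, (eta, eta_y)
    return best_cfg, best_score
```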