Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Online Min-max Problems with Non-convexity and Non-stationarity
Authors: Yu Huang, Yuan Cheng, Yingbin Liang, Longbo Huang
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the efficiency of the proposed TSODA algorithm and verify the theoretical results through numerical simulations. We consider the min-max problem of training an empirical Wasserstein robustness model (WRM) (Sinha et al., 2017). The real-world datasets we consider are MNIST (Deng, 2012) and Fashion MNIST (Xiao et al., 2017), each containing 60k samples. We simulate the online WRM model as follows. We randomly split the given dataset into T pieces {D_t}_{t=1}^T, and the learner sequentially receives D_t. |
| Researcher Affiliation | Academia | Yu Huang (Institute for Interdisciplinary Information Sciences, Tsinghua University); Yuan Cheng (University of Science and Technology of China); Yingbin Liang (Department of Electrical and Computer Engineering, The Ohio State University); Longbo Huang (Institute for Interdisciplinary Information Sciences, Tsinghua University) |
| Pseudocode | Yes | Algorithm 1 Time-Smoothed Online Gradient Descent Ascent (TSODA). Input: window size w ≥ 1, stepsizes (η_x, η_y), tolerance δ > 0. Initialization: (x_1, y_1). 1: for t = 1 to T do 2: Predict (x_t, y_t). Observe the cost function f_t : R^m × R^n → R 3: Set (x_{t+1}, y_{t+1}) ← (x_t, y_t) 4: repeat 5: x_{t+1} ← x_{t+1} − η_x ∇_x F_{t,w}(x_{t+1}, y_{t+1}) 6: y_{t+1} ← P_Y(y_{t+1} + η_y ∇_y F_{t,w}(x_{t+1}, y_{t+1})) 7: until Equation (7) in Stop Condition 1 holds 8: end for |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the methodology is openly available or will be released. |
| Open Datasets | Yes | The real-world datasets we consider are MNIST (Deng, 2012) and Fashion MNIST (Xiao et al., 2017), each containing 60k samples. |
| Dataset Splits | Yes | We randomly split the given dataset into T pieces {D_t}_{t=1}^T, and the learner sequentially receives D_t. At each round t, f_t(x, y) = L(x, y; D_t). We choose T = 100 for the online setting. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We choose T = 100 for the online setting. The network architecture mainly follows Sinha et al. (2017), which consists of three convolution blocks with filters of size 8×8, 6×6 and 5×5 respectively, activated by the ELU function (Clevert et al., 2015), then followed by a fully connected layer and softmax output. Furthermore, we set the adversarial perturbation γ ∈ {0.4, 1.3}, which is consistent with Sinha et al. (2017). Typically, the stepsizes for GDmax are chosen to be equal, i.e. η_x = η_y. |
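The TSODA pseudocode extracted above can be illustrated with a minimal pure-Python sketch. This is not the paper's WRM experiment: the toy scalar cost f_t(x, y) = (x − c_t)·y − y²/2, the cost stream `costs`, and all parameter defaults are assumptions chosen so the inner GDA loop and the time-smoothed window average are easy to follow; the constraint set Y is taken as all of R, so line 6's projection P_Y is the identity here.

```python
import math

def tsoda(costs, w=3, eta_x=0.1, eta_y=0.1, delta=1e-4, max_inner=10_000):
    """Sketch of Time-Smoothed Online GDA (TSODA) on toy scalar costs.

    Each round's cost is f_t(x, y) = (x - c_t) * y - 0.5 * y**2, which is
    strongly concave in y. Its gradients are grad_x = y and
    grad_y = (x - c_t) - y, so the window average F_{t,w} reduces to
    replacing c_t with the mean of the last w values.
    """
    x, y = 0.0, 0.0
    history = []
    for t, c_t in enumerate(costs):
        history.append((x, y))                 # predict (x_t, y_t), then observe f_t
        window = costs[max(0, t - w + 1): t + 1]
        c_bar = sum(window) / len(window)      # window average defining F_{t,w}
        for _ in range(max_inner):
            gx = y                             # grad_x F_{t,w}
            gy = (x - c_bar) - y               # grad_y F_{t,w}
            if math.hypot(gx, gy) <= delta:    # stand-in for Stop Condition 1
                break
            x -= eta_x * gx                    # descent step on x
            y += eta_y * gy                    # ascent step on y (Y = R, no projection)
        # the resulting (x, y) is carried into round t + 1
    return x, y, history
```

On a constant cost stream the inner loop drives the pair to the stationary point y = 0, x = c̄, matching the intuition that TSODA tracks a near-stationary point of the smoothed objective round by round.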