Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Decoupled SGDA for Games with Intermittent Strategy Communication
Authors: Ali Zindari, Parham Yazdkhasti, Anton Rodomanov, Tatjana Chavdarova, Sebastian U. Stich
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through numerical experiments, we validate the practical benefits of Decoupled SGDA in non-convex GAN training, federated learning with imbalanced noise, and in weakly coupled quadratic minimax games, showcasing its versatility. Additionally, Decoupled SGDA outperforms federated minimax approaches in noisy, imbalanced settings. |
| Researcher Affiliation | Academia | 1CISPA Helmholtz Center for Information Security, Saarbrücken, Germany 2Politecnico di Milano, Italy. Correspondence to: Ali Zindari <EMAIL>, Parham Yazdkhasti <EMAIL>, Anton Rodomanov <EMAIL>, Tatjana Chavdarova <EMAIL>, Sebastian U. Stich <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Decoupled SGDA for two-player games (main body). Algorithm 2 Decoupled SGD for N-player games (Appendix C). Algorithm 3 Decoupled SGDA for two-player federated minimax games (Appendix F.2). Algorithm 4 Decoupled SGDA with Ghost Sequence (Appendix G). |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the methodology described, nor does it include a link to a code repository. |
| Open Datasets | Yes | Comparison of FID vs Communication Rounds (Dataset: svhn). Comparison of FID vs Communication Rounds (Dataset: cifar10). In Section I.4, it is stated: 'a Generative Adversarial Network (GAN) was trained using the CIFAR-10 and SVHN datasets'. The paper also provides citations for these well-known public datasets: Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Master's thesis, 2009; and Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. 2011. URL http://ufldl.stanford.edu/housenumbers/. |
| Dataset Splits | Yes | The paper mentions using 'CIFAR-10' and 'SVHN datasets' in Section 5.3 and I.4. These are standard benchmark datasets with well-defined splits, commonly used in machine learning research. |
| Hardware Specification | Yes | Training was conducted using CUDA on an NVIDIA L4 GPU. |
| Software Dependencies | No | The paper mentions training was conducted using CUDA, but does not provide specific version numbers for software dependencies such as CUDA, PyTorch, or Python libraries. |
| Experiment Setup | Yes | In Section I.4, the paper states: 'The GAN was trained with a learning rate of 1 × 10⁻⁴, a batch size of 256, and 50,000 rounds of updates. The hidden dimension size for the generator was 128. ... Both the generator and discriminator were optimized using the Adam optimizer, with a learning rate scheduler that decayed by a factor of 0.95 every 1000 steps. Additionally, a gradient penalty term was applied to stabilize training. The generator's latent space dimension was set to 100, and its Exponential Moving Average (EMA) was maintained with a decay factor of 0.999 for evaluation purposes.' |
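To make the pseudocode row concrete: the idea behind Decoupled SGDA, as suggested by the title and the quoted abstract, is that each player runs local stochastic gradient descent-ascent steps against a *stale* copy of the opponent's strategy, exchanging strategies only intermittently. The sketch below illustrates this on a weakly coupled quadratic minimax game; all function and parameter names are illustrative assumptions, not taken from the paper's Algorithm 1.

```python
import random

def decoupled_sgda(a=1.0, b=1.0, c=0.2, lr=0.1, k_local=5, rounds=40,
                   noise=0.01, seed=0):
    """Toy sketch of decoupled SGDA on the quadratic minimax game
    f(x, y) = a*x**2/2 - b*y**2/2 + c*x*y (x minimizes, y maximizes).
    Each round, the players exchange strategies once, then take k_local
    stochastic gradient steps using the stale opponent copy."""
    rng = random.Random(seed)
    x, y = 1.0, -1.0
    for _ in range(rounds):
        x_stale, y_stale = x, y  # snapshot exchanged at communication time
        for _ in range(k_local):
            # Noisy partial gradients, each evaluated at the stale opponent.
            gx = a * x + c * y_stale + noise * rng.gauss(0.0, 1.0)
            gy = -b * y + c * x_stale + noise * rng.gauss(0.0, 1.0)
            x -= lr * gx  # min player: gradient descent
            y += lr * gy  # max player: gradient ascent
    return x, y
```

On this toy game both players contract toward the saddle point (0, 0) despite communicating only once every `k_local` steps; with stronger coupling `c`, the stale-opponent error grows and communication would need to be more frequent, which matches the "weakly coupled" regime highlighted in the abstract.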
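The quoted experiment setup combines a step-decayed learning rate (factor 0.95 every 1000 steps) with an EMA of the generator weights (decay 0.999). A minimal sketch of those two pieces in plain Python, with hypothetical helper names:

```python
def lr_at(step, base_lr=1e-4, gamma=0.95, every=1000):
    """Learning rate after `step` updates: decayed by `gamma` once every
    `every` steps, matching the quoted schedule."""
    return base_lr * gamma ** (step // every)

def ema_update(ema_value, new_value, decay=0.999):
    """One EMA step with the quoted decay factor of 0.999; in the paper's
    setup this would be applied per-weight to the generator for evaluation."""
    return decay * ema_value + (1.0 - decay) * new_value
```

For example, `lr_at(0)` is 1e-4 and `lr_at(3000)` is 1e-4 × 0.95³; the EMA keeps a slow-moving average of the generator, which typically yields smoother samples than the raw weights at evaluation time.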