Dynamic Game Theoretic Neural Optimizer

Authors: Guan-Horng Liu, Tianrong Chen, Evangelos Theodorou

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section "5. Experiment"; Table 2, "Accuracy (%) of residual-based networks (averaged over 6 random seeds)"; Table 3, "Accuracy (%) of inception-based networks (averaged over 4 random seeds)"; Figure 7, "Our second-order method DGNOpt exhibits similar runtime (≈ 40%) and memory (≈ 30%) complexity compared to the second-order baseline EKFAC."
Researcher Affiliation | Academia | Center for Machine Learning, School of Aerospace Engineering, and School of Electrical and Computer Engineering, Georgia Institute of Technology, USA.
Pseudocode | Yes | Algorithm 1: Dynamic Game Theoretic Neural Optimizer.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include a link to a code repository.
Open Datasets | Yes | "Datasets and networks. We verify the performance of DGNOpt on image classification datasets... Specifically, we first consider residual-based networks... For larger datasets such as CIFAR10/100... For MNIST and SVHN..."
Dataset Splits | No | The paper mentions common datasets (MNIST, SVHN, CIFAR10, CIFAR100) but does not explicitly give training/validation/test split percentages, sample counts, or references to predefined standard splits in the main text.
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU or CPU models or cloud computing resources.
Software Dependencies | No | The paper does not provide version numbers for any software dependencies or libraries used in the implementation or experiments.
Experiment Setup | Yes | "All networks use ReLU activation and are trained with 128 batch size. Other setups are detailed in Appendix E." (See the illustrative sketch below the table.)
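To make the reported setup concrete, the following is a minimal, hypothetical training-loop sketch. Only the batch size of 128 and the ReLU activations come from the quoted excerpt; the dataset choice (CIFAR-10), network architecture, optimizer (plain SGD), learning rate, and epoch count are placeholder assumptions, since the paper defers those details to Appendix E and its optimizer DGNOpt is not reproduced here.

```python
# Hypothetical setup sketch: only batch_size=128 and ReLU activations are taken
# from the paper's stated configuration. Everything else below is an assumption.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=128,  # batch size stated in the paper
    shuffle=True, num_workers=2)

# Small ReLU-activated convolutional network as a stand-in for the
# residual/inception architectures evaluated in the paper.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),
)

criterion = nn.CrossEntropyLoss()
# SGD is a placeholder; the paper's DGNOpt is a second-order,
# game-theoretic optimizer and is not implemented here.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

for epoch in range(1):  # epoch count is illustrative only
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```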