Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Differentiable Game Mechanics
Authors: Alistair Letcher, David Balduzzi, Sébastien Racanière, James Martens, Jakob Foerster, Karl Tuyls, Thore Graepel
JMLR 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Basic experiments show SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs while at the same time being applicable to, and having guarantees in, much more general cases. We investigate the empirical performance of SGA in four basic experiments. |
| Researcher Affiliation | Collaboration | Alistair Letcher, University of Oxford; David Balduzzi, DeepMind |
| Pseudocode | Yes | Appendix A. TensorFlow Code to Compute SGA. Source code is available at https://github.com/deepmind/symplectic-gradient-adjustment. Since computing the symplectic adjustment is quite simple, we include an explicit description here for completeness. The code requires a list of n losses, Ls, and a list of variables for the n players, xs. The function fwd_gradients, which implements forward-mode auto-differentiation, is in the module tf.contrib.kfac.utils. ... def jac_vec(ys, xs, vs): return fwd_gradients(ys, xs, grad_xs=vs, stop_gradients=xs) |
| Open Source Code | Yes | Source code is available at https://github.com/deepmind/symplectic-gradient-adjustment. |
| Open Datasets | No | Learning a two-dimensional mixture of Gaussians: Data is sampled from a highly multimodal distribution designed to probe the tendency of GANs to collapse onto a subset of modes during training. The distribution is a mixture of 16 Gaussians arranged in a 4 × 4 grid. ... Learning a high-dimensional unimodal Gaussian: Santurkar et al. demonstrate boundary distortion using data sampled from a 75-dimensional unimodal Gaussian with spherical covariance matrix. No direct link or repository for the datasets is provided; only the synthetic data-generation process is described. |
| Dataset Splits | No | The paper uses synthetic datasets (mixture of 16 Gaussians, 75-dimensional spherical Gaussian) and does not specify any explicit training, validation, or test splits for them. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. |
| Software Dependencies | No | Appendix A. TensorFlow Code to Compute SGA mentions "tf.gradients" and "fwd_gradients" from "tf.contrib.kfac.utils", indicating TensorFlow is used, but no specific version numbers are provided for any software components. |
| Experiment Setup | Yes | The generator and discriminator networks both have 6 ReLU layers of 384 neurons. The generator has two output neurons; the discriminator has one. The networks are trained under RMSProp. Learning rates were chosen by visual inspection of grid search results at iteration 8000. More precisely, grid search was over learning rates {1e-5, 2e-5, 5e-5, 8e-5, 1e-4, 2e-4, 5e-4} and then a more refined linear search over [8e-5, 2e-4]. Figure 9 shows results after {2000, 4000, 6000, 8000} iterations. |
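The symplectic gradient adjustment quoted in the Pseudocode row amounts to: compute the simultaneous gradient ξ of the players' losses, form the antisymmetric part A of the game Hessian, and follow ξ + λAᵀξ instead of ξ. Below is a minimal NumPy sketch on the classic two-player bilinear game (l₁ = xy, l₂ = −xy), where the Hessian is known in closed form; the function names and the λ, learning-rate values are illustrative, not taken from the paper's TensorFlow code.

```python
import numpy as np

def simultaneous_grad(x, y):
    # xi = (dl1/dx, dl2/dy) = (y, -x) for l1 = x*y, l2 = -x*y.
    return np.array([y, -x])

def sga_step(x, y, lr=0.05, lam=1.0):
    xi = simultaneous_grad(x, y)
    # Game Hessian H = d(xi)/d(x, y); for this bilinear game H = [[0, 1], [-1, 0]].
    H = np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = 0.5 * (H - H.T)           # antisymmetric part of the Hessian
    adj = xi + lam * A.T @ xi     # symplectic gradient adjustment
    x, y = np.array([x, y]) - lr * adj
    return x, y

x, y = 1.0, 1.0
for _ in range(500):
    x, y = sga_step(x, y)
print(abs(x) < 1e-2 and abs(y) < 1e-2)  # True: SGA converges to the fixed point (0, 0)
```

Plain simultaneous gradient descent on this game cycles around the origin without converging; the adjustment term adds an inward-pointing component, which is the behavior the paper's basic experiments probe at scale.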