EigenGame: PCA as a Nash Equilibrium

Authors: Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the scalability of the algorithm with experiments on large image datasets and neural network activations.
Researcher Affiliation | Industry | Ian Gemp, Brian McWilliams, Claire Vernade & Thore Graepel, DeepMind. {imgemp,bmcw,vernade,thore}@google.com
Pseudocode | Yes | Algorithm 1 EigenGame^R (Sequential); Algorithm 2 EigenGame (EigenGame^R update with ∇̂v̂_i instead of ∇̂^R v̂_i). A hedged sketch of this update appears below the table.
Open Source Code | No | The paper does not contain an explicit statement that the source code for the methodology is being released, nor does it provide a link to a code repository.
Open Datasets | Yes | MNIST handwritten digits; ImageNet dataset
Dataset Splits | No | The paper mentions using the 'training set' for MNIST and discusses 'held-out runs' for synthetic data, but it does not provide specific percentages or counts for training, validation, and test splits, nor does it explicitly cite predefined splits for reproducibility.
Hardware Specification | Yes | Computing the top-32 principal components takes approximately nine hours on 32 TPUv3s.
Software Dependencies | No | The paper states, 'We implemented a data-and-model parallel version of EigenGame in JAX (Bradbury et al., 2018),' but it does not provide a specific version number for JAX.
Experiment Setup | Yes | Learning rates were chosen from {10^-3, ..., 10^-6} on 10 held-out runs; mini-batches of size 128 were sampled.
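For context on the Pseudocode and Software Dependencies rows, the following is a minimal JAX sketch of an EigenGame-style update for player i: a stochastic gradient of the utility (variance captured along v_i minus penalties for alignment with parents j < i), followed by a Riemannian projection onto the unit sphere. The function names, shapes, and hyperparameters are illustrative assumptions; the authors did not release code, so this is not their implementation.

```python
import jax
import jax.numpy as jnp

def eigengame_grad(X, V, i):
    """Stochastic gradient of player i's EigenGame-style utility on a minibatch.

    X: (batch, d) minibatch of data; V: (k, d) current eigenvector estimates,
    one per row. Player i is rewarded for variance captured along V[i] and
    penalized for alignment with its parents j < i.
    """
    r_i = X @ V[i]                       # rewards X v_i on this minibatch
    grad = X.T @ r_i                     # ≈ (unnormalized) covariance acting on v_i
    for j in range(i):
        r_j = X @ V[j]
        grad = grad - (r_i @ r_j) / (r_j @ r_j) * (X.T @ r_j)
    return 2.0 * grad

def riemannian_step(v, grad, lr):
    """Project the gradient onto the sphere's tangent space at v, step, renormalize."""
    grad_r = grad - (grad @ v) * v
    v_new = v + lr * grad_r
    return v_new / jnp.linalg.norm(v_new)

# Hypothetical usage: one update for player i on a random mini-batch of size 128.
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (128, 50))    # 128 samples, 50 features (made up)
V = jnp.eye(4, 50)                       # 4 players, orthonormal initialization
i, lr = 1, 1e-3                          # learning rate from the sweep range above
g = eigengame_grad(X, V, i)
V = V.at[i].set(riemannian_step(V[i], g, lr))
```

This sketch follows the sequential ordering, where each player uses only its already-learned parents; the data-and-model parallel JAX variant mentioned in the Software Dependencies row additionally distributes data and players across devices, which is not shown here.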