Functional Space Analysis of Local GAN Convergence

Authors: Valentin Khrulkov, Artem Babenko, Ivan Oseledets

ICML 2021

Each entry below pairs a reproducibility variable with its assessed result and the supporting LLM response.
Research Type: Experimental. The paper states: 'Our experiments are organized as follows. We start by numerically investigating the value of λ_min and the impact of formulas from Section 6 on the convergence on synthetic datasets. We then study the more practical CIFAR-10 (Krizhevsky et al., 2009) dataset. Firstly, we show the correlation between the λ_min obtained for various augmented versions of the dataset and FID values obtained for the corresponding GAN.'
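The entry above describes correlating λ_min with the FID of GANs trained on augmented versions of the dataset. Below is a hypothetical sketch of that analysis step in Python; the λ_min estimates and FID scores are placeholders, not numbers from the paper.

    # Hypothetical sketch: correlate per-augmentation lambda_min estimates
    # with the FID of the corresponding trained GAN. All values are made up.
    import numpy as np

    lambda_min = np.array([0.12, 0.08, 0.05, 0.03])  # one estimate per augmentation
    fid = np.array([18.0, 24.0, 31.0, 40.0])         # matching FID scores

    r = np.corrcoef(lambda_min, fid)[0, 1]  # Pearson correlation coefficient
    print(f"Pearson correlation: {r:.3f}")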
Researcher Affiliation: Collaboration. Valentin Khrulkov (1), Artem Babenko (1, 2), Ivan Oseledets (3); (1) Yandex, Russia; (2) National Research University Higher School of Economics, Moscow, Russia; (3) Skolkovo Institute of Science and Technology, Moscow, Russia.
Pseudocode: No. The paper describes its methods mathematically and in prose but does not include any pseudocode or algorithm blocks.
Open Source Code: No. The paper states: 'For the synthetic datasets, our experiments were performed in JAX (Bradbury et al., 2018) using the ODE-GAN code available at GitHub1.' (Footnote 1: https://github.com/deepmind/deepmind-research/tree/master/ode_gan). This refers to a third-party codebase that the authors used for their experiments, not open-source code released by the authors for their own described methodology (the functional space analysis or the derivations).
Open Datasets: Yes. The paper states: 'We then study the more practical CIFAR-10 (Krizhevsky et al., 2009) dataset.'
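Since CIFAR-10 is openly available, it can be fetched programmatically. A minimal sketch, assuming torchvision's standard CIFAR-10 loader (the paper only cites the dataset and does not state how it was obtained):

    # Minimal sketch: download CIFAR-10 via torchvision (an assumption;
    # the paper does not specify an acquisition method).
    import torchvision
    import torchvision.transforms as transforms

    transform = transforms.ToTensor()
    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=transform)
    test_set = torchvision.datasets.CIFAR10(
        root="./data", train=False, download=True, transform=transform)
    print(len(train_set), len(test_set))  # 50000 10000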
Dataset Splits: No. The paper mentions the 'train part of the dataset' and an 'augmented test set' but does not specify exact percentages, sample counts, or an explicit validation split.
Hardware Specification: No. No specific hardware details (GPU models, CPU types, or memory amounts) used to run the experiments are mentioned in the paper.
Software Dependencies: No. The paper states 'experiments were performed in JAX (Bradbury et al., 2018)' and 'For CIFAR-10 experiments we utilized PyTorch (Paszke et al., 2019)'. While software names are given with citations, the specific version numbers that reproducibility requires are not provided in the main text.
Experiment Setup: Yes. The paper states: 'For training, we use the batch size of 256 and Adam optimizer with a learning rate 10^-4 (results are not sensitive to these parameters).'
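A minimal PyTorch sketch of the quoted configuration; only the batch size (256) and learning rate (1e-4) come from the paper, while the model, loss, and data pipeline below are illustrative assumptions:

    # Sketch of the stated setup: batch size 256, Adam with lr = 1e-4.
    # The linear "model" and mean loss are placeholders, not the paper's GAN.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    model = torch.nn.Linear(128, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    data = TensorDataset(torch.randn(1024, 128))
    loader = DataLoader(data, batch_size=256, shuffle=True)

    for (batch,) in loader:
        optimizer.zero_grad()
        loss = model(batch).mean()  # placeholder objective
        loss.backward()
        optimizer.step()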