Structure-preserving GANs

Authors: Jeremiah Birrell, Markos Katsoulakis, Luc Rey-Bellet, Wei Zhu

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical experiments and ablation studies across a broad range of data sets, including real-world medical imaging, validate our theory, and show our proposed methods achieve significantly improved sample fidelity and diversity (almost an order of magnitude measured in Fréchet Inception Distance), especially in the small data regime.
Researcher Affiliation | Academia | Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA.
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | RotMNIST is built by randomly rotating the original 10-class 28×28 MNIST digits (LeCun et al., 1998), resulting in an SO(2)-invariant distribution. We use different portions of the 60,000 training images for experiments in Section 5.4. (A construction sketch follows the table.)
Dataset Splits | Yes | We use different portions of the 60,000 training images for experiments in Section 5.4. For example, Figure 2 and Table 1 report results with 1% (600) training samples, which implies a training split, though explicit validation/test splits are not given as percentages or counts. However, MNIST has standard predefined splits, and the tables report results on training samples, implying the remainder is used for evaluation.
Hardware Specification | No | The paper mentions "high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative" but does not specify any particular hardware components (e.g., GPU/CPU models, memory).
Software Dependencies | Yes | All models are trained using the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.0 and β2 = 0.9 (Zhang et al., 2019).
Experiment Setup | Yes | All models are trained for 40,000 generator iterations with a batch size of 32. The learning rates were set to ηG = 0.0001 and ηD = 0.0004, respectively. (A training-setup sketch follows the table.)
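
The Open Datasets row describes how RotMNIST is constructed; below is a minimal sketch of that construction, assuming PyTorch/torchvision. The rotation range, subset selection, and variable names are assumptions inferred from the quoted description (uniform random rotation of 28×28 MNIST digits, 1% = 600 training images), not the authors' released code.

```python
# Hypothetical RotMNIST construction: random rotations of MNIST digits.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

rot_mnist_transform = transforms.Compose([
    # Uniform random rotation over (-180, 180) degrees, i.e. the full circle,
    # which yields an (approximately) SO(2)-invariant image distribution.
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])

train_set = datasets.MNIST(root="./data", train=True, download=True,
                           transform=rot_mnist_transform)

# Small-data regime: a random 1% portion (600 of the 60,000 training images).
subset_size = 600
indices = torch.randperm(len(train_set))[:subset_size].tolist()
train_subset = Subset(train_set, indices)
```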
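
The Software Dependencies and Experiment Setup rows quote the optimizer and training hyperparameters; the sketch below shows how those settings could be wired up, again assuming PyTorch. The generator and discriminator modules are placeholders and the training loop is only indicated, since the paper provides no code.

```python
# Hedged sketch of the reported training configuration (not the authors' implementation).
import torch
from torch import nn

# Placeholder generator/discriminator; the paper's architectures are not reproduced here.
G = nn.Sequential(nn.Linear(128, 28 * 28))
D = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))

# Adam with beta1 = 0.0 and beta2 = 0.9 (Software Dependencies row);
# learning rates eta_G = 0.0001 and eta_D = 0.0004 (Experiment Setup row).
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))

batch_size = 32               # as reported
num_generator_iters = 40_000  # generator iterations, as reported

# for step in range(num_generator_iters):
#     ...discriminator update(s), then one generator update per iteration...
```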