Idempotent Generative Network

Authors: Assaf Shocher, Amil V. Dravid, Yossi Gandelsman, Inbar Mosseri, Michael Rubinstein, Alexei A. Efros

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate IGN on MNIST (Deng, 2012), a dataset of grayscale handwritten digits, and CelebA (Liu et al., 2015), a dataset of face images. We use image resolutions of 28 × 28 and 64 × 64 respectively. We report FID = 39 (DCGAN FID = 34).
Researcher Affiliation | Collaboration | Assaf Shocher¹,², Amil Dravid¹, Yossi Gandelsman¹, Inbar Mosseri², Michael Rubinstein², Alexei A. Efros¹ (¹UC Berkeley, ²Google Research)
Pseudocode | Yes | Source Code 1: IGN training routine (PyTorch). (A hedged sketch of such a training routine follows the table.)
Open Source Code | No | The paper states 'In Sec. 2.2 we provide the basic training PyTorch code for IGN.' This refers to code provided within the paper itself (Source Code 1), not an external, publicly accessible repository.
Open Datasets | Yes | We evaluate IGN on MNIST (Deng, 2012), a dataset of grayscale handwritten digits, and CelebA (Liu et al., 2015), a dataset of face images. (A hedged data-loading sketch follows the table.)
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce the partitioning into training, validation, and test sets.
Hardware Specification | No | The paper mentions 'Batch size 256' and '# GPUs 8' in Table 1 but does not provide specific hardware details such as GPU models, CPU types, or memory capacity used to run its experiments.
Software Dependencies | No | The paper mentions PyTorch in Source Code 1 and the Adam optimizer in Table 1, but it does not provide version numbers for these or any other software components.
Experiment Setup | Yes | The training and network hyperparameters are presented in Table 1, which lists 'Optimizer: Adam (α = 0.0001, β1 = 0.5, β2 = 0.999)', 'Batch size: 256', 'Iterations: 1000', 'LeakyReLU slope: 0.2', 'Weight, bias initialization: isotropic Gaussian (µ = 0, σ = 0.02), Constant(0)', 'Loss metric D: L1, D(y1, y2) = ||y1 − y2||1', and 'Loss-term weights: λr = 20, λi = 20, λt = 2.5'. (A hedged PyTorch sketch of this setup follows the table.)
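
For concreteness, the Table 1 values quoted above map onto a PyTorch setup along the following lines. This is a minimal sketch, not the authors' code: the tiny `model` is only a placeholder for the actual IGN architecture, and restricting the Gaussian initialization to convolutional and linear layers is an assumption.

```python
import torch
import torch.nn as nn

def init_weights(m):
    # Table 1: weights ~ isotropic Gaussian (mu = 0, sigma = 0.02), biases = 0
    # (applying this only to conv/linear layers is an assumption)
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.constant_(m.bias, 0.0)

# Placeholder network standing in for the IGN model f (the real architecture is not quoted above)
model = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1),
    nn.LeakyReLU(0.2),                                  # Table 1: LeakyReLU slope 0.2
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
)
model.apply(init_weights)

# Table 1: Adam with alpha = 0.0001, beta1 = 0.5, beta2 = 0.999
opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

def D(y1, y2):
    # Table 1: L1 distance D(y1, y2) = ||y1 - y2||_1, implemented here as a per-element
    # mean for scale stability (whether the paper sums or averages is not stated above)
    return (y1 - y2).abs().mean()
```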
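
The datasets row mentions MNIST at 28 × 28 and CelebA at 64 × 64, and Table 1 gives a batch size of 256. A torchvision-based loading sketch consistent with those numbers could look as follows; the crop/resize strategy, normalization, data root, and worker count are assumptions, not details from the paper.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST is natively 28 x 28; CelebA is cropped and resized to 64 x 64 (strategy is an assumption)
mnist_tf = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),                      # scale to [-1, 1] (assumption)
])
celeba_tf = transforms.Compose([
    transforms.CenterCrop(178),
    transforms.Resize(64),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

mnist = datasets.MNIST(root="data", train=True, download=True, transform=mnist_tf)
celeba = datasets.CelebA(root="data", split="train", download=True, transform=celeba_tf)

# Table 1: batch size 256
mnist_loader = DataLoader(mnist, batch_size=256, shuffle=True, num_workers=4)
celeba_loader = DataLoader(celeba, batch_size=256, shuffle=True, num_workers=4)
```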
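
The 'Pseudocode' row refers to the paper's Source Code 1, the IGN training routine. The sketch below reconstructs the general shape of such a routine from the quoted loss-term weights (λr, λi, λt) and the idea of enforcing idempotence (f(f(z)) ≈ f(z)) while keeping f(x) ≈ x on real data; the use of a gradient-blocked copy `f_copy` to separate the idempotence and tightness updates is an assumption here and should be checked against the paper's actual listing.

```python
import torch

def D(y1, y2):
    # same L1 metric as in the setup sketch above
    return (y1 - y2).abs().mean()

lambda_r, lambda_i, lambda_t = 20.0, 20.0, 2.5   # Table 1: loss-term weights

def train_step(f, f_copy, opt, x):
    """One IGN-style update on a batch of real images x.

    `f_copy` is assumed to be a copy of `f` whose parameters have requires_grad=False,
    used only to control which application of f receives gradients.
    """
    f_copy.load_state_dict(f.state_dict())       # keep the frozen copy in sync with f
    z = torch.randn_like(x)                      # latent noise shaped like the data

    fx = f(x)                                    # f should act as the identity on real data
    fz = f(z)                                    # generated sample
    f_fz = f_copy(fz)                            # f(f(z)) through the frozen outer copy
    ff_z = f(fz.detach())                        # f(f(z)) through the live outer model

    loss_rec = D(fx, x)                          # reconstruction: f(x) ≈ x
    loss_idem = D(f_fz, fz)                      # idempotence: pull f(f(z)) toward f(z)
    loss_tight = -D(ff_z, fz.detach())           # tightness: push f(f(z)) away from f(z)

    loss = lambda_r * loss_rec + lambda_i * loss_idem + lambda_t * loss_tight
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

A hypothetical call site would build `f`, create `f_copy = copy.deepcopy(f).requires_grad_(False)`, construct the Adam optimizer from the setup sketch over `f.parameters()`, and loop `train_step(f, f_copy, opt, x)` over batches of 256 images from the loaders above.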