Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules

Authors: Kazuki Irie, Jürgen Schmidhuber

ICLR 2023

Reproducibility Variable — Result — LLM Response
Research Type — Experimental — "We train our FPAs in the generative adversarial networks framework, and evaluate on various image datasets. We evaluate our model on six standard image generation datasets (CelebA, LSUN-Church, MetFaces, AFHQ-Cat/Dog/Wild; all at the resolution of 64x64), and report both qualitative image quality as well as the commonly used Fréchet Inception Distance (FID) evaluation metric (Heusel et al., 2017)."
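The FID metric quoted above compares the Gaussian statistics of Inception features from real and generated images. A minimal sketch of the distance itself, assuming the standard closed form ||μ1 − μ2||² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}) (the feature-extraction step, handled in practice by packages such as pytorch-fid, is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which we discard.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Here `mu`/`sigma` would be the mean and covariance of Inception-v3 activations over each image set; identical statistics give a distance of zero.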
Researcher Affiliation — Academia — "Kazuki Irie1, Jürgen Schmidhuber1,2; 1The Swiss AI Lab, IDSIA, USI & SUPSI, Lugano, Switzerland; 2AI Initiative, KAUST, Thuwal, Saudi Arabia"
Pseudocode — No — "The paper uses mathematical equations and describes sequences of operations but does not provide any explicitly labeled 'Pseudocode' or 'Algorithm' blocks."
Open Source Code — Yes — "Our code is public." (https://github.com/IDSIA/fpainter)
Open Datasets — Yes — "We use six standard benchmark datasets for image generation: CelebA (Liu et al., 2015), LSUN Church (Yu et al., 2015), Animal Faces-HQ (AFHQ) Cat/Dog/Wild (Choi et al., 2020), and MetFaces (Karras et al., 2020a)."
Dataset Splits — No — The paper computes FID scores every 5K training steps to monitor performance but does not specify explicit train/validation/test dataset splits by percentage or sample counts.
Hardware Specification — Yes — "Any training run can be completed within one to three days on a single V100 GPU."
Software Dependencies — No — The paper mentions the use of specific implementations (e.g., "unofficial public LightGAN implementation", "official implementation of StyleGAN3", "pytorch-fid implementation", "denoising-diffusion-pytorch"), but it does not specify version numbers for these software components or other general dependencies such as Python or PyTorch.
Experiment Setup — Yes — "The batch size and learning rate are fixed to 20 and 2e-4 respectively. We provide all hyper-parameters in Appendix B.2 and discuss training/generation speed in Appendix C.2. Table 3 summarises the corresponding results."
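For context on the method named in the title: a Fast weight Painter (FPA) generates an image as a weight matrix built up by a sequence of outer-product synaptic updates. A minimal sketch using the classic delta rule W_t = W_{t-1} + β_t (v_t − W_{t-1} k_t) k_tᵀ — a standard fast-weight-programmer update; the exact parameterisation and the networks producing the keys/values here are illustrative, not taken from the paper:

```python
import numpy as np

def delta_rule_paint(keys, values, betas, height, width):
    """'Paint' a (height x width) image-as-weight-matrix by applying a
    sequence of delta-rule synaptic updates:
        W_t = W_{t-1} + beta_t * (v_t - W_{t-1} @ k_t) outer k_t
    keys:   iterable of (width,)  vectors (illustrative stand-ins)
    values: iterable of (height,) vectors
    betas:  iterable of scalar learning rates
    """
    W = np.zeros((height, width))
    for k, v, beta in zip(keys, values, betas):
        # Error-correcting outer-product update: move W's response to
        # key k toward the target value v.
        W = W + beta * np.outer(v - W @ k, k)
    return W
```

With a unit-norm key and β = 1, a single update makes the matrix respond to that key exactly with the target value, which is the error-correcting property that distinguishes the delta rule from a pure Hebbian sum of outer products.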