GAN "Steerability" without optimization

Authors: Nurit Spingarn, Ron Banner, Tomer Michaeli

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate results mainly on BigGAN, which is class-conditional, but our trajectories are class-agnostic. Our approach is advantageous over existing methods in several respects. First, it is 10^4-10^5 times faster. Second, it seems to detect more semantic directions than other methods. And third, it allows explicitly accounting for dataset biases. ... Figure 4 shows the distributions of areas and centers of object bounding boxes in the transformed images. As can be seen, our trajectories lead to similar effects to those of Jahanian et al. (2020), despite being 10^4 times faster to compute (see Tab. 1). (A hedged sketch of this bounding-box comparison appears after the table.)
Researcher Affiliation | Collaboration | Nurit Spingarn Eliezer, Ron Banner, Tomer Michaeli; Technion - Israel Institute of Technology, Habana Labs (Intel); {nurits@campus,tomer.m@ee}.technion.ac.il, ron.banner@intel.com
Pseudocode | No | No pseudocode or algorithm block found. The methods are described through mathematical formulations.
Open Source Code | No | The paper provides GitHub links for cited works (Jahanian et al. (2020) and Härkönen et al. (2020)) in the quantitative evaluation section of the appendix, but it does not state that the authors' own code is open source or provide a link to it.
Open Datasets | Yes | We illustrate results mainly on BigGAN... see additional results with BigGAN and with the DCGAN architecture of (Miyato et al., 2018) in App. A.3. ... We used 100 randomly chosen classes from the ImageNet dataset, and 30k images from each class.
Dataset Splits | No | The paper uses pre-trained GANs and evaluates transformation effects, but does not perform new model training that would require specifying training, validation, or test splits for its own methodology.
Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. It only mentions the computation time of various methods in Table 1.
Software Dependencies | No | The paper mentions using the MobileNet-SSD-V1 detector and refers to the BigGAN and DCGAN architectures, but does not provide specific version numbers for any software libraries or dependencies used in its implementation.
Experiment Setup | No | The paper presents mathematical derivations for computing steering directions (e.g., formulas for M and q) and mentions parameters such as the step size (multiplying the steering vector q by some α > 0) and the number of steps N in a walk, but it does not detail typical experimental-setup elements such as learning rates, batch sizes, or optimizer configurations, since it analyzes pre-trained GANs rather than training new models. (A minimal latent-walk sketch follows the table.)
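
The Experiment Setup row mentions walks obtained by scaling a steering vector q by a step size α and taking N steps. Below is a minimal sketch of such a linear latent-space walk, assuming a precomputed direction q; the `generator` and `class_vector` names are hypothetical stand-ins for a pretrained class-conditional GAN and are not the paper's code (the paper derives q in closed form rather than by optimization).

```python
# Minimal sketch of a linear latent-space walk with a precomputed steering
# direction q. Everything here is illustrative: random z and q stand in for
# the real latent code and the closed-form direction from the paper.
import numpy as np

def linear_walk(z, q, alpha=1.0, n_steps=5):
    """Return the latent codes z + k * alpha * q for k = 0 .. n_steps."""
    q = q / np.linalg.norm(q)                      # unit-norm direction
    return [z + k * alpha * q for k in range(n_steps + 1)]

rng = np.random.default_rng(0)
latent_dim = 128                                   # BigGAN uses 128-dim latents
z = rng.standard_normal(latent_dim)                # sample latent code
q = rng.standard_normal(latent_dim)                # placeholder steering direction
codes = linear_walk(z, q, alpha=0.5, n_steps=4)
# images = [generator(c, class_vector) for c in codes]  # hypothetical generator call
```

In an actual evaluation, each returned code would be decoded by the pretrained generator and the resulting images inspected for the intended effect (e.g., zoom or shift).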
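The Research Type row quotes the paper's quantitative check based on distributions of object bounding-box areas and centers (its Figure 4). The sketch below shows that kind of comparison, assuming an off-the-shelf detector (e.g., the MobileNet-SSD-V1 named in the paper) has already produced boxes in (x0, y0, x1, y1) pixel coordinates; the toy box values are illustrative only and not taken from the paper.

```python
# Hedged sketch of comparing bounding-box statistics before and after a
# latent-space walk. The (x0, y0, x1, y1) box format and the toy values are
# assumptions for illustration, not the paper's exact pipeline.
import numpy as np

def box_stats(boxes):
    """Return (areas, centers) for boxes given as (x0, y0, x1, y1) in pixels."""
    b = np.asarray(boxes, dtype=float)
    areas = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    centers = np.stack([(b[:, 0] + b[:, 2]) / 2.0,
                        (b[:, 1] + b[:, 3]) / 2.0], axis=1)
    return areas, centers

before = [(40, 40, 90, 90), (30, 50, 80, 100)]     # toy boxes before a "zoom" walk
after = [(20, 20, 110, 110), (10, 30, 100, 120)]   # toy boxes after the walk
for name, boxes in (("before", before), ("after", after)):
    areas, centers = box_stats(boxes)
    print(name, "mean area:", areas.mean(), "mean center:", centers.mean(axis=0))
```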