Adjustable Real-time Style Transfer

Authors: Mohammad Babaeizadeh, Golnaz Ghiasi

ICLR 2020

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters. |
| Researcher Affiliation | Industry | Mohammad Babaeizadeh (Google Brain) and Golnaz Ghiasi (Google Brain) |
| Pseudocode | No | The paper provides network architectures in tables but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of the project is available at the project website: https://goo.gl/PVWQ9K. |
| Open Datasets | Yes | We trained our model on ImageNet (Deng et al., 2009) as content images, while using paintings from Kaggle Painter by Numbers (Kaggle) and textures from the Describable Textures Dataset (Cimpoi et al., 2014) as style images. |
| Dataset Splits | No | The paper mentions training on ImageNet and using specific datasets as test images (ImageNet test set, MS-COCO, CelebA) but does not provide explicit training/validation/test splits or mention a specific validation set. |
| Hardware Specification | Yes | Our implementation can process 47.5 fps on an NVIDIA GeForce 1080. |
| Software Dependencies | No | The paper mentions the optimizer (Adam) and activation functions (ReLU, Sigmoid) but does not list specific software libraries or their version numbers. |
| Experiment Setup | Yes | Optimizer: Adam (α = 0.001, β1 = 0.9, β2 = 0.999); training iterations: 200K; batch size: 8; weight initialization: isotropic Gaussian (µ = 0, σ = 0.01). |
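The reported experiment setup maps directly onto a framework-agnostic training configuration. Below is a minimal sketch of that configuration in plain Python; the names (`TRAIN_CONFIG`, `init_weight`) are illustrative, not taken from the paper's released code.

```python
import random

# Hyperparameters as reported in the paper's experiment setup.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,   # Adam alpha
    "beta1": 0.9,
    "beta2": 0.999,
    "iterations": 200_000,   # 200K training iterations
    "batch_size": 8,
}

def init_weight(mu: float = 0.0, sigma: float = 0.01) -> float:
    """Draw one weight from the reported isotropic Gaussian init
    (mu = 0, sigma = 0.01)."""
    return random.gauss(mu, sigma)
```

In a real training script these values would be passed to the framework's Adam optimizer (e.g. `torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999))` in PyTorch).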