Neural FFTs for Universal Texture Image Synthesis
Authors: Morteza Mardani, Guilin Liu, Aysegul Dundar, Shiqiu Liu, Andrew Tao, Bryan Catanzaro
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations confirm that our method achieves state-of-the-art performance both quantitatively and qualitatively. (Abstract) |
| Researcher Affiliation | Industry | Morteza Mardani, Guilin Liu, Aysegul Dundar, Shiqiu Liu, Andrew Tao, Bryan Catanzaro; NVIDIA; {mmardani,guilinl,adundar,edliu,atao,bcatanzaro}@nvidia.com |
| Pseudocode | No | The paper describes the network architecture and training process in textual descriptions and diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing its source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | A large texture dataset with 55,583 images from 15 different sources [8, 53, 9, 6, 7, 47, 1, 15, 45, 32] are collected. (Section 5) |
| Dataset Splits | Yes | The dataset is randomly split into a training set of 49,583 images, a validation set of 1,000 images, and a test set of 5,000 images. (Section 5) |
| Hardware Specification | Yes | The model was trained on 4 DGX-1 stations with 32 total NVIDIA Tesla V100 GPUs and 320 CPUs using synchronized batch normalization layers [25]. (Section 5.1) |
| Software Dependencies | No | The paper mentions a 'PyTorch interface with cuDNN' but does not specify version numbers for either software component. |
| Experiment Setup | Yes | We choose batch size of 8 per GPU, and the initial learning rate 10⁻⁵ that is halved every 200 epochs. Total of 800 epochs are used for convergence. We also set λ_vgg = 0.1, λ_style = 200, λ_adv = 0.1. (Section 5.1) |
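The dataset-split row above can be read as a one-step reproduction sketch. Only the split sizes (49,583 / 1,000 / 5,000 out of 55,583 images) come from the paper; the use of `torch.utils.data.random_split`, the stand-in dataset, and the seed are assumptions for illustration, since the paper does not describe how the random split was generated.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Hypothetical stand-in for the 55,583 collected texture images;
# the real dataset is assembled from the 15 sources cited in Section 5.
full_dataset = TensorDataset(torch.randn(55583, 3, 8, 8))

# Split sizes reported in Section 5; the seed is an assumption.
train_set, val_set, test_set = random_split(
    full_dataset, [49583, 1000, 5000],
    generator=torch.Generator().manual_seed(0),
)
```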
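Similarly, the Experiment Setup row maps to a compact PyTorch training configuration. The following is a minimal sketch under stated assumptions, not the authors' code: the network is a placeholder, the optimizer choice (Adam) is an assumption the quoted setup does not confirm, and the VGG, style, and adversarial losses are stubbed with trivial terms. Only the hyperparameters, batch size 8 per GPU, initial learning rate 10⁻⁵ halved every 200 epochs, 800 total epochs, and the loss weights, are taken from Section 5.1.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the paper's texture-synthesis model;
# the actual architecture is described in the paper, not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

# Optimizer choice is an assumption; the learning rate schedule
# (1e-5 halved every 200 epochs) is from Section 5.1.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

# Loss weights reported in Section 5.1.
lambda_vgg, lambda_style, lambda_adv = 0.1, 200.0, 0.1

# Dummy batch at the reported per-GPU batch size of 8; real training
# iterates over the 49,583-image training split.
guide = torch.randn(8, 3, 64, 64)
target = torch.randn(8, 3, 64, 64)

for epoch in range(800):  # 800 epochs reported for convergence
    output = model(guide)
    # The VGG-perceptual, Gram-matrix style, and GAN terms from the paper
    # are stubbed here with trivial placeholders.
    loss = (lambda_vgg * nn.functional.l1_loss(output, target)
            + lambda_style * nn.functional.l1_loss(output, target)
            + lambda_adv * output.mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

In the paper's multi-GPU setting, the model would additionally be wrapped in DistributedDataParallel after `nn.SyncBatchNorm.convert_sync_batchnorm(model)`, matching the synchronized batch normalization noted in the Hardware Specification row; that is omitted here so the sketch runs on a single device.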