GramGAN: Deep 3D Texture Synthesis From 2D Exemplars
Authors: Tiziano Portenier, Siavash Arjomand Bigdeli, Orcun Goksel
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative evaluations on a diverse set of exemplars motivate our design decisions and show that our system performs superior to previous state of the art. Finally, we conduct a user study that confirms the benefits of our framework. |
| Researcher Affiliation | Collaboration | Tiziano Portenier¹, Siavash Bigdeli², Orcun Goksel¹ — ¹ Computer-assisted Applications in Medicine, ETH Zurich, Switzerland; ² Swiss Center for Electronics and Microtechnology, Switzerland |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for their methodology. |
| Open Datasets | No | For this purpose we trained both models on a texture dataset consisting of 100 stone texture exemplars collected online. No specific link, DOI, or formal citation is provided for this dataset. |
| Dataset Splits | No | The paper mentions training on a dataset and evaluating on '22 reference input patches not seen during training' but does not specify explicit train/validation/test dataset splits with percentages or counts. |
| Hardware Specification | Yes | Training our model on a single exemplar takes a few hours on a single NVIDIA 2080ti GPU |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We train all models using Adam optimizer [28] and we use (equalized [29]) learning rates of 2×10⁻³ for D and 5×10⁻⁴ for G (E, Q, and S). In each training iteration both G and D are updated once and sequentially. In L_D we set λ = 10. An important factor is the choice of the hyperparameters α and β in L_G. Although various settings produce plausible outputs, we achieved best results when setting α = 0.1 and β = 1. In the single exemplar setting we set n = 16 and our conditional models use n = 32 noise frequencies. We train on texture patches of size 128² and use noise instances of resolution 64³ × n. |
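The reported hyperparameters map directly onto a training configuration. Below is a minimal PyTorch sketch of that setup. Since the paper releases no code, the network stubs for G, D, E, Q, and S are hypothetical placeholders, and the equalized learning rates of [29] would additionally require custom per-layer weight scaling that is not shown here.

```python
import torch
import torch.nn as nn

# Placeholder stubs standing in for the paper's networks G, D, E, Q, and S.
# The real architectures are not released, so these shapes are illustrative only.
G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
E = nn.Sequential(nn.Linear(3, 32), nn.ReLU())
Q = nn.Sequential(nn.Linear(32, 32))
S = nn.Sequential(nn.Linear(32, 32))
D = nn.Sequential(nn.Linear(3, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

# Learning rates quoted above: 2e-3 for D, 5e-4 for G (shared with E, Q, S).
opt_D = torch.optim.Adam(D.parameters(), lr=2e-3)
opt_G = torch.optim.Adam(
    list(G.parameters()) + list(E.parameters())
    + list(Q.parameters()) + list(S.parameters()),
    lr=5e-4,
)

# Loss weights reported in the paper.
lambda_gp = 10.0        # gradient-penalty weight lambda in L_D
alpha, beta = 0.1, 1.0  # weights alpha and beta in L_G

# Other reported settings, for reference:
n_single = 16     # noise frequencies, single-exemplar setting
n_cond = 32       # noise frequencies, conditional models
patch_size = 128  # training patches are 128 x 128
```

In each iteration, both optimizers would then be stepped once, sequentially, matching the "both G and D are updated once and sequentially" schedule quoted above; the GramGAN losses themselves depend on unreleased implementation details and are omitted.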