What does it take to generate natural textures?

Authors: Ivan Ustyuzhaninov*, Wieland Brendel*, Leon Gatys, Matthias Bethge

Venue: ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test and compare a wide range of single-layer architectures with different filter sizes and different types of filters (random, hand-crafted, and unsupervisedly learnt filters) against the state-of-the-art texture model by Gatys et al. (2015a). We utilize a quantitative texture quality measure based on the synthesis loss in the VGG-based model (Gatys et al., 2015a) to replace the commonplace evaluation of texture models through qualitative human inspection. (A sketch of this style of synthesis loss follows the table.)
Researcher Affiliation | Academia | (1) Centre for Integrative Neuroscience, University of Tübingen, Germany; (2) Bernstein Center for Computational Neuroscience, Tübingen, Germany; (3) Graduate School of Neural Information Processing, University of Tübingen, Germany; (4) Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Pseudocode | No | The paper describes the algorithms and models used in text and mathematical formulas, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'The networks were implemented in Lasagne (Dieleman et al., 2015; Theano Development Team, 2016)'. Footnote 1 provides a URL to a VGG19 model in a Lasagne recipe, but this is a third-party model and not an explicit release of the authors' own code for their method.
Open Datasets | Yes | We randomly sample and whiten 10^7 patches of size 11 × 11 from the ImageNet dataset (Russakovsky et al., 2015). (A sketch of this sampling-and-whitening step follows the table.)
Dataset Splits | No | The paper mentions using the ImageNet dataset but does not explicitly specify any training, validation, or test dataset splits or cross-validation setup.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions that 'The networks were implemented in Lasagne (Dieleman et al., 2015; Theano Development Team, 2016)' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | We leave all parameters of the optimization algorithm at their default values except for the maximum number of iterations (2000), and add box constraints with range [0, 1]. In addition, we scale the loss and the gradients by a factor of 10^7 in order to avoid early stopping of the optimization algorithm. (A sketch of this optimizer setup follows the table.)
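
The quantitative quality measure referenced in the Research Type row is based on the synthesis loss of the VGG-based model of Gatys et al. (2015a), which compares Gram matrices of feature maps. Below is a minimal NumPy sketch of that style of loss, assuming the per-layer feature maps are already computed; the paper's exact layer choices and normalization constants are not reproduced here.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one feature map of shape (channels, height, width),
    e.g. the activations of a single network layer for one image."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (h * w)         # pairwise channel correlations

def synthesis_loss(target_feats, synth_feats):
    """Sum over layers of squared Gram-matrix differences between the
    original texture and the synthesized image."""
    return sum(
        np.sum((gram_matrix(ft) - gram_matrix(fs)) ** 2)
        for ft, fs in zip(target_feats, synth_feats)
    )

# Toy usage with random stand-ins for two layers of feature maps:
rng = np.random.default_rng(0)
feats_a = [rng.random((64, 32, 32)), rng.random((128, 16, 16))]
feats_b = [f + 0.1 * rng.random(f.shape) for f in feats_a]
print(synthesis_loss(feats_a, feats_b))
```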
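
The Open Datasets row quotes the paper's preprocessing: 10^7 whitened 11 × 11 patches sampled from ImageNet. Here is a minimal sketch of such a step, assuming grayscale images and ZCA whitening (the paper does not spell out the whitening variant); the images below are random stand-ins, not ImageNet data.

```python
import numpy as np

def sample_patches(images, n_patches, size=11, rng=None):
    """Randomly crop square patches and flatten each to a row vector."""
    rng = np.random.default_rng(rng)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - size + 1)
        x = rng.integers(img.shape[1] - size + 1)
        patches.append(img[y:y + size, x:x + size].ravel())
    return np.stack(patches)

def zca_whiten(patches, eps=1e-5):
    """Decorrelate patch dimensions and scale them to unit variance."""
    centered = patches - patches.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ w

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(10)]   # stand-in images
white_patches = zca_whiten(sample_patches(images, n_patches=1000, rng=rng))
```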
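
The Experiment Setup row maps directly onto a box-constrained L-BFGS call. A minimal SciPy sketch, assuming scipy.optimize.minimize as the optimizer (the paper does not name its implementation) and using a trivial quadratic stand-in in place of the network's synthesis loss; the image size is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

SCALE = 1e7  # the paper scales loss and gradients by 10^7
             # to keep the optimizer from stopping early

def loss_and_grad(x, target):
    """Stand-in for the real synthesis loss: squared distance to a
    fixed target vector. Replace with the model's texture loss."""
    diff = x - target
    return SCALE * 0.5 * float(diff @ diff), SCALE * diff

rng = np.random.default_rng(0)
target = rng.random(64 * 64 * 3)   # hypothetical target "image"
x0 = rng.random(target.size)       # random initialization in [0, 1]

result = minimize(
    loss_and_grad, x0, args=(target,),
    jac=True,                       # function returns (loss, gradient)
    method="L-BFGS-B",              # default parameters otherwise
    bounds=[(0.0, 1.0)] * x0.size,  # box constraints with range [0, 1]
    options={"maxiter": 2000},      # maximum number of iterations
)
synthesized = result.x.reshape(64, 64, 3)
```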