Generalized rectifier wavelet covariance models for texture synthesis

Authors: Antoine Brochard, Sixin Zhang, Stéphane Mallat

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 4 shows synthesis results of our model, compared with state-of-the-art models. Additionally, the code is made publicly available.
Researcher Affiliation | Academia | Antoine Brochard, ENS, PSL University, Paris, France (antoine.brochard@ens.fr); Sixin Zhang, Université de Toulouse, INP, IRIT, Toulouse, France (sixin.zhang@irit.fr); Stéphane Mallat, Collège de France, Paris, France, and Flatiron Institute, New York, USA.
Pseudocode | No | The paper describes algorithmic parameters but does not present them in a structured pseudocode or algorithm block.
Open Source Code | Yes | All calculations can be reproduced by a Python software available at https://github.com/abrochar/wavelet-texture-synthesis.
Open Datasets | Yes | The sources for the presented textures are given in Appendix A. The natural texture examples were obtained from the following sources: CNS NYU, Textures.com, the Describable Textures Dataset, and the GitHub page of Berger & Memisevic (2017).
Dataset Splits | No | The paper describes texture synthesis from a single observed image, not a traditional machine learning setup with a large dataset; no training/validation/test splits are specified.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'Python software', 'Matlab software', 'Lasagne', 'Pytorch', and 'scipy.optimize' functions, but does not specify version numbers for any of these software dependencies.
Experiment Setup | Yes | For all the ALPHA models, we use Morlet wavelets with a maximal scale J = 5 and number of orientations L = 4. To draw the samples, we follow gradient-based sampling algorithms... we use the L-BFGS algorithm (Nocedal, 1980) for the optimization of the objective. ... We use the L-BFGS procedure implemented in Pytorch. It runs for 500 iterations and is then restarted with an initialization obtained from the previous L-BFGS result. This is repeated 10 times to obtain the synthesis (with an additional histogram matching post-processing).
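
For context, below is a minimal PyTorch sketch of the sampling loop quoted above. The statistic function `phi` is a hypothetical stand-in for the paper's generalized rectifier wavelet covariance representation (the actual implementation is in the linked repository); only the L-BFGS restart structure and white-noise initialization follow the quoted setup, and the histogram-matching post-processing is noted only as a comment.

```python
# Sketch of gradient-based texture sampling with restarted L-BFGS,
# assuming a statistic map `phi` (stand-in for the paper's wavelet
# covariance model; see the authors' repository for the real one).
import torch

def synthesize(observed, phi, n_restarts=10, n_iter=500):
    """Match phi(x) to phi(observed) by minimizing a squared loss with L-BFGS."""
    target = phi(observed).detach()
    x = torch.randn_like(observed)              # white-noise initialization
    for _ in range(n_restarts):                 # restart from the previous result
        x = x.detach().requires_grad_(True)
        opt = torch.optim.LBFGS([x], max_iter=n_iter)

        def closure():
            opt.zero_grad()
            loss = torch.sum((phi(x) - target) ** 2)
            loss.backward()
            return loss

        opt.step(closure)
    # The quoted setup applies an additional histogram matching step here.
    return x.detach()
```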