Polynomial Neural Fields for Subband Decomposition and Manipulation

Authors: Guandao Yang, Sagie Benaim, Varun Jampani, Kyle Genova, Jonathan Barron, Thomas Funkhouser, Bharath Hariharan, Serge Belongie

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the applicability of our framework along three different axes: (Expressivity) For a number of different signal types, we demonstrate our ability to fit a given signal. We observe that our method enjoys faster convergence in terms of the number of iterations. (Interpretability) We visually demonstrate our learned PNF, whose outputs are localized in the frequency domain with both an upper and a lower band-limit. (Decomposition) Finally, we demonstrate the ability to control a PNF on the tasks of texture transfer and scale-space representation, based on our subband decomposition. For all presented experiments, full training details and additional qualitative and quantitative results are provided in the supplementary.
Researcher Affiliation | Collaboration | Guandao Yang (Cornell University), Sagie Benaim (University of Copenhagen), Varun Jampani (Google Research), Kyle Genova (Google Research), Jonathan T. Barron (Google Research), Thomas Funkhouser (Google Research), Bharath Hariharan (Cornell University), Serge Belongie (University of Copenhagen)
Pseudocode | No | The paper includes illustrations of network architecture and mathematical equations (e.g., Figure 3, Equations 4, 5) but does not provide structured pseudocode or an algorithm block.
Open Source Code | Yes | Code is available at https://github.com/stevenygd/PNF.
Open Datasets | Yes | Following BACON [32], we train a PNF and the baselines to fit images from the DIV2K [2] dataset.
Dataset Splits | No | The paper describes training and testing procedures for different tasks (e.g., 'During training, images are downsampled to 256^2. All networks are trained for 5000 iterations. At test time, we sample the fields at 512^2'), but it does not provide explicit training, validation, and test dataset splits with percentages or sample counts in the main text.
Hardware Specification | No | The paper mentions 'Experiments are supported in part by Google Cloud Platform and GPUs donated by NVIDIA,' but does not specify exact GPU models, CPU types, or detailed cloud instance specifications used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | During training, images are downsampled to 256^2. All networks are trained for 5000 iterations.
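The subband behavior quoted above (each learned component localized in frequency with both an upper and a lower band-limit, and the components summing to the full signal) has a classical Fourier-domain analogue. The sketch below illustrates that analogue with numpy FFT masks on a 1-D toy signal; it is an illustration of the subband-decomposition idea only, not the paper's PNF architecture, and the cutoff value is an arbitrary choice for the example.

```python
import numpy as np

def subband_decompose(signal, cutoffs):
    """Split a 1-D signal into frequency subbands.

    Each subband keeps only the Fourier coefficients whose absolute
    frequency lies between a lower and an upper band-limit, mirroring
    the localized spectra of a PNF's components (classical analogue,
    not the paper's method).
    """
    freqs = np.abs(np.fft.fftfreq(len(signal)))  # cycles per sample
    spectrum = np.fft.fft(signal)
    bands = []
    lo = 0.0
    for hi in list(cutoffs) + [freqs.max() + 1.0]:
        mask = (freqs >= lo) & (freqs < hi)      # lower AND upper band-limit
        bands.append(np.fft.ifft(spectrum * mask).real)
        lo = hi
    return bands

# Toy signal: a low-frequency plus a high-frequency sinusoid.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Split at |f| = 0.05 cycles/sample: 4/256 falls below, 40/256 above.
bands = subband_decompose(x, cutoffs=[0.05])

# Because the masks partition the spectrum, the subbands sum back
# to the original signal, and each band isolates one sinusoid.
assert np.allclose(sum(bands), x)
assert np.allclose(bands[0], np.sin(2 * np.pi * 4 * t))
```

Varying the cutoff list yields the scale-space-style stack of progressively band-limited reconstructions that the decomposition experiments rely on.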