Signal Processing for Implicit Neural Representations

Authors: Dejia Xu, Peihao Wang, Yifan Jiang, Zhiwen Fan, Zhangyang Wang

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate the proposed INSP framework on several challenging tasks, using different combinations of [...]. First, we build low-level image processing filters using either hand-crafted or learnable [...]. Then, we construct convolutional neural networks with our INSP-ConvNet framework and validate its performance on image classification. More results and implementation details are provided in Appendices E and F.
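For context on the "hand-crafted or learnable" distinction in the low-level filtering experiments, the sketch below contrasts a fixed kernel with a learnable one in PyTorch. This is an illustrative assumption, not the authors' implementation: INSP operates directly on INRs rather than on decoded pixel grids, and the Sobel kernel and 3x3 size are chosen only for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hand-crafted filter: a fixed 3x3 Sobel kernel for horizontal edges.
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)

# Learnable filter: a 3x3 kernel whose weights are optimized end to end.
learnable = nn.Parameter(torch.randn(1, 1, 3, 3) * 0.1)

def apply_filter(img, kernel):
    """Convolve a single-channel image of shape (1, 1, H, W) with a 3x3 kernel."""
    return F.conv2d(img, kernel, padding=1)

# Example usage on a random single-channel image.
img = torch.rand(1, 1, 64, 64)
edges = apply_filter(img, sobel_x)
```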
Researcher Affiliation | Academia | Dejia Xu (dejia@utexas.edu), Peihao Wang (peihaowang@utexas.edu), Yifan Jiang (yifanjiang97@utexas.edu), Zhiwen Fan (zhiwenfan@utexas.edu), Zhangyang Wang (atlaswang@utexas.edu), The University of Texas at Austin
Pseudocode | No | The paper describes the proposed frameworks and their operations but does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | The University of Texas at Austin https://vita-group.github.io/INSP/
Open Datasets | Yes | For low-level image processing, we operate on natural images from the Set5 dataset [75], Set14 dataset [76], and DIV-2K dataset [77]. Originally designed for super-resolution, these datasets contain images diverse in style and content. Note that the unprocessed images presented in figures are the images decoded from unprocessed INRs. Since our method operates directly on INRs, we first fit the images with INRs and then feed the INRs into our framework. The final output is another INR which can be decoded into the desired images. The training set of our method consists of 90 examples of INRs, where each INR is built on the SIREN [28] architecture.
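As a concrete illustration of the "fit the images with INRs" step, here is a minimal sketch (an assumption, not the released code) of fitting one image with a SIREN-style coordinate MLP in PyTorch. The hidden width, omega_0, step count, learning rate, and optimizer choice are illustrative, and SIREN's specialized weight initialization is omitted for brevity.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, as in SIREN [28]."""
    def __init__(self, in_f, out_f, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# SIREN INR: maps (x, y) coordinates in [-1, 1]^2 to an RGB value.
siren = nn.Sequential(
    SineLayer(2, 256), SineLayer(256, 256), SineLayer(256, 256),
    nn.Linear(256, 3),
)

def fit_inr(image, steps=2000, lr=1e-4):
    """Fit `image` (an (H, W, 3) tensor) by regressing pixel colors from coordinates."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 3)
    opt = torch.optim.Adam(siren.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((siren(coords) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return siren  # the fitted INR; decoding means evaluating siren(coords)
```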
Dataset Splits | No | The paper mentions using well-known datasets (MNIST, CIFAR-10, Set5, Set14, DIV-2K) and a training set size for INRs (90 examples) but does not provide specific train/validation/test split percentages, sample counts, or references to predefined standard splits for all experiments.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper mentions 'PyTorch [35]' and the 'AdamW optimizer [88]' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | Both experiments take 1000 epochs to optimize with the AdamW optimizer [88] and a learning rate of 10^-4.
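For reference, the reported setup maps onto a standard PyTorch loop roughly as follows. This is a sketch under stated assumptions: `model`, `train_loader`, and `compute_loss` are hypothetical placeholders, and weight decay and batch size are not specified in the paper.

```python
import torch

# Sketch of the reported optimization: 1000 epochs, AdamW [88], learning rate 10^-4.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
for epoch in range(1000):
    for batch in train_loader:
        optimizer.zero_grad()
        loss = compute_loss(model, batch)  # task-specific objective (assumption)
        loss.backward()
        optimizer.step()
```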