Learning Transferable Features for Implicit Neural Representations

Authors: Kushal Kardam Vyas, Imtiaz Humayun, Aniket Dashpute, Richard Baraniuk, Ashok Veeraraghavan, Guha Balakrishnan

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate STRAINER in several ways. First, we test STRAINER on image fitting across several datasets including faces (CelebA-HQ) and medical images (OASIS-MRI) and show (Figure 2) that STRAINER's learned features are indeed transferable, resulting in a +10 dB gain in reconstruction quality compared to a vanilla SIREN model. We further assess the data-driven prior captured by STRAINER by evaluating it on inverse problems such as denoising and super-resolution.
Researcher Affiliation | Academia | Kushal Vyas (kushal.vyas@rice.edu), Ahmed Imtiaz Humayun (imtiaz@rice.edu), Aniket Dashpute (aniket.dashpute@rice.edu), Richard G. Baraniuk (richb@rice.edu), Ashok Veeraraghavan (vashok@rice.edu), Guha Balakrishnan (guha@rice.edu), Rice University
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our implementation can be found on Google Colab. ... https://colab.research.google.com/drive/1fBZAwqE8C_lrRPAe-hQZJTWrMJuAKtG2?usp=sharing
Open Datasets | Yes | We mainly used the CelebA-HQ [22], Animal Faces-HQ (AFHQ) [10], and OASIS-MRI [18, 28] images for our experiments.
Dataset Splits | Yes | We randomly divided CelebA-HQ into 10 train images and 550 test images. For AFHQ, we used only the cat data, and used ten images for training and 368 images for testing. For OASIS-MRI, we used 10 of the (template-aligned) 2D raw MRI slices for training, and 144 for testing.
Hardware Specification | Yes | Further, we run the code on an Nvidia A100 GPU and report the time after averaging 3 such runs for each method.
Software Dependencies | Yes | Our implementation is written in PyTorch [33]. ... [33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library, 2019.
Experiment Setup | Yes | In all experiments, we used the SIREN [39] MLP with 6 layers and sinusoid nonlinearities. ... We used the Adam optimizer with a learning rate of 10^-4 for STRAINER's training and test-time evaluation, unless mentioned otherwise. ... We normalized all images between (0, 1), and input coordinates between (-1, 1).
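The "+10 dB gain in reconstruction quality" quoted above refers to PSNR, the standard reconstruction metric for image-fitting INRs. A minimal sketch of the metric, for images normalized to (0, 1) as in the quoted setup (the function name is ours, not the paper's):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A +10 dB gain corresponds to a 10x reduction in mean squared error:
# mse = 1e-2 gives 20 dB, while mse = 1e-3 gives 30 dB.
```

On this scale, STRAINER's reported +10 dB over vanilla SIREN means roughly an order-of-magnitude smaller pixel-wise squared error at the same fitting budget.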
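The quoted experiment setup (a 6-layer SIREN with sinusoid nonlinearities, Adam at lr 10^-4, coordinates normalized to (-1, 1), pixels to (0, 1)) can be sketched in PyTorch as below. The hidden width (64) and frequency factor w0 = 30 are illustrative assumptions not stated in the quote; only the depth, activation, optimizer, learning rate, and normalization come from it.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sinusoid, as in SIREN."""
    def __init__(self, in_f, out_f, w0=30.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_f, out_f)
        # SIREN weight initialization: uniform bounds depend on layer position.
        with torch.no_grad():
            bound = 1.0 / in_f if is_first else (6.0 / in_f) ** 0.5 / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

def make_siren(hidden=64, depth=6):
    # 6-layer MLP mapping (x, y) coordinates in (-1, 1) to RGB values.
    layers = [SineLayer(2, hidden, is_first=True)]
    layers += [SineLayer(hidden, hidden) for _ in range(depth - 2)]
    layers += [nn.Linear(hidden, 3)]
    return nn.Sequential(*layers)

model = make_siren()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr from the quoted setup
```

Fitting then minimizes the mean squared error between `model(coords)` and the normalized pixel values; STRAINER's contribution is initializing the early layers of such a network from features learned across multiple images rather than from scratch.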