A sampling theory perspective on activations for implicit neural representations

Authors: Hemanth Saratchandran, Sameera Ramasinghe, Violetta Shevchenko, Alexander Long, Simon Lucey

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we aim to compare the performance of different INR activations. First, we focus on image and NeRF reconstructions and later move on to dynamical systems." |
| Researcher Affiliation | Collaboration | ¹University of Adelaide, Australia; ²Amazon, Australia |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | DIV2K dataset (Agustsson & Timofte, 2017) |
| Dataset Splits | No | The paper mentions training and testing but does not provide specific details on dataset splits (e.g., percentages or exact counts for train/validation/test). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | "We use 4-layer networks with 256 width for these experiments." |
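The reported setup ("4-layer networks with 256 width") corresponds to a standard INR multilayer perceptron mapping coordinates to signal values. The following is a minimal NumPy sketch of such a network, assuming "4-layer" means four hidden layers, a sinusoidal activation (one of the activation families the paper compares), and SIREN-style fan-in initialization; none of these specifics are confirmed by the report itself.

```python
import numpy as np

def init_layer(fan_in, fan_out, rng):
    # Uniform init scaled by fan-in (SIREN-style; an assumption,
    # not necessarily the paper's exact scheme).
    bound = np.sqrt(6.0 / fan_in)
    return rng.uniform(-bound, bound, (fan_in, fan_out)), np.zeros(fan_out)

def inr_forward(coords, params, omega=30.0):
    # MLP mapping 2-D pixel coordinates to RGB values, with sine
    # activations on the hidden layers and a linear output layer.
    h = coords
    for W, b in params[:-1]:
        h = np.sin(omega * (h @ W + b))
    W, b = params[-1]
    return h @ W + b

rng = np.random.default_rng(0)
# Input coords -> four hidden layers of width 256 -> RGB output.
widths = [2, 256, 256, 256, 256, 3]
params = [init_layer(i, o, rng) for i, o in zip(widths[:-1], widths[1:])]

xy = rng.uniform(-1.0, 1.0, (5, 2))   # five sample pixel coordinates
out = inr_forward(xy, params)
print(out.shape)                       # (5, 3)
```

Training such a network on the DIV2K images would then amount to regressing `out` against the ground-truth RGB values at the sampled coordinates.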